00:00:00.001 Started by upstream project "autotest-per-patch" build number 121028
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.028 The recommended git tool is: git
00:00:00.028 using credential 00000000-0000-0000-0000-000000000002
00:00:00.030 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.044 Fetching changes from the remote Git repository
00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.065 Using shallow fetch with depth 1
00:00:00.065 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.065 > git --version # timeout=10
00:00:00.091 > git --version # 'git version 2.39.2'
00:00:00.091 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.092 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.092 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.151 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.160 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.171 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD)
00:00:02.171 > git config core.sparsecheckout # timeout=10
00:00:02.180 > git read-tree -mu HEAD # timeout=10
00:00:02.195 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5
00:00:02.211 Commit message: "pool: attach build logs for failed merge builds"
00:00:02.212 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10
00:00:02.290 [Pipeline] Start of Pipeline
00:00:02.301 [Pipeline] library
00:00:02.302 Loading library shm_lib@master
00:00:02.302 Library shm_lib@master is cached. Copying from home.
00:00:02.315 [Pipeline] node
00:00:02.320 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:02.324 [Pipeline] {
00:00:02.335 [Pipeline] catchError
00:00:02.336 [Pipeline] {
00:00:02.344 [Pipeline] wrap
00:00:02.350 [Pipeline] {
00:00:02.355 [Pipeline] stage
00:00:02.356 [Pipeline] { (Prologue)
00:00:02.512 [Pipeline] sh
00:00:02.804 + logger -p user.info -t JENKINS-CI
00:00:02.821 [Pipeline] echo
00:00:02.822 Node: GP11
00:00:02.827 [Pipeline] sh
00:00:03.124 [Pipeline] setCustomBuildProperty
00:00:03.131 [Pipeline] echo
00:00:03.132 Cleanup processes
00:00:03.135 [Pipeline] sh
00:00:03.420 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:03.420 2394486 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:03.434 [Pipeline] sh
00:00:03.721 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:03.721 ++ grep -v 'sudo pgrep'
00:00:03.721 ++ awk '{print $1}'
00:00:03.721 + sudo kill -9
00:00:03.721 + true
00:00:03.735 [Pipeline] cleanWs
00:00:03.745 [WS-CLEANUP] Deleting project workspace...
00:00:03.745 [WS-CLEANUP] Deferred wipeout is used...
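The "Cleanup processes" step above is a common pgrep idiom: list anything still referencing the workspace, drop the pgrep invocation itself, and kill the rest. A minimal standalone sketch of that idiom (workspace path copied from this job; the bare "kill -9" followed by "+ true" in the log just means nothing matched):

    # Kill any stale processes still holding the job workspace.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    # With $pids empty this degenerates to a bare "kill -9", which fails;
    # the || true swallows that so the pipeline step still succeeds.
    sudo kill -9 $pids || true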
00:00:03.752 [WS-CLEANUP] done
00:00:03.756 [Pipeline] setCustomBuildProperty
00:00:03.772 [Pipeline] sh
00:00:04.058 + sudo git config --global --replace-all safe.directory '*'
00:00:04.127 [Pipeline] nodesByLabel
00:00:04.128 Found a total of 1 nodes with the 'sorcerer' label
00:00:04.137 [Pipeline] httpRequest
00:00:04.142 HttpMethod: GET
00:00:04.142 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:04.149 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:04.153 Response Code: HTTP/1.1 200 OK
00:00:04.153 Success: Status code 200 is in the accepted range: 200,404
00:00:04.153 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:04.432 [Pipeline] sh
00:00:04.722 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:04.739 [Pipeline] httpRequest
00:00:04.742 HttpMethod: GET
00:00:04.743 URL: http://10.211.164.96/packages/spdk_dd57ed3e88dcafd6e7188cca8ba5f8d9254a85a1.tar.gz
00:00:04.745 Sending request to url: http://10.211.164.96/packages/spdk_dd57ed3e88dcafd6e7188cca8ba5f8d9254a85a1.tar.gz
00:00:04.748 Response Code: HTTP/1.1 200 OK
00:00:04.748 Success: Status code 200 is in the accepted range: 200,404
00:00:04.748 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dd57ed3e88dcafd6e7188cca8ba5f8d9254a85a1.tar.gz
00:00:22.835 [Pipeline] sh
00:00:23.117 + tar --no-same-owner -xf spdk_dd57ed3e88dcafd6e7188cca8ba5f8d9254a85a1.tar.gz
00:00:25.662 [Pipeline] sh
00:00:25.961 + git -C spdk log --oneline -n5
00:00:25.961 dd57ed3e8 sma: add listener check on vfio device creation
00:00:25.961 d36d2b7e8 doc: mark adrfam as optional
00:00:25.961 129e6ba3b test/nvmf: add missing remove listener discovery
00:00:25.961 38dca48f0 libvfio-user: update submodule to point to `spdk` branch
00:00:25.961 7a71abf69 fuzz/llvm_vfio_fuzz: limit length of generated data to `bytes_per_cmd`
00:00:25.977 [Pipeline] }
00:00:25.994 [Pipeline] // stage
00:00:26.003 [Pipeline] stage
00:00:26.005 [Pipeline] { (Prepare)
00:00:26.023 [Pipeline] writeFile
00:00:26.040 [Pipeline] sh
00:00:26.322 + logger -p user.info -t JENKINS-CI
00:00:26.336 [Pipeline] sh
00:00:26.618 + logger -p user.info -t JENKINS-CI
00:00:26.630 [Pipeline] sh
00:00:26.911 + cat autorun-spdk.conf
00:00:26.911 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:26.911 SPDK_TEST_NVMF=1
00:00:26.911 SPDK_TEST_NVME_CLI=1
00:00:26.911 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:26.911 SPDK_TEST_NVMF_NICS=e810
00:00:26.911 SPDK_TEST_VFIOUSER=1
00:00:26.911 SPDK_RUN_UBSAN=1
00:00:26.911 NET_TYPE=phy
00:00:26.919 RUN_NIGHTLY=0
00:00:26.924 [Pipeline] readFile
00:00:26.946 [Pipeline] withEnv
00:00:26.949 [Pipeline] {
00:00:26.963 [Pipeline] sh
00:00:27.243 + set -ex
00:00:27.243 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:27.243 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:27.243 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:27.243 ++ SPDK_TEST_NVMF=1
00:00:27.243 ++ SPDK_TEST_NVME_CLI=1
00:00:27.243 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:27.243 ++ SPDK_TEST_NVMF_NICS=e810
00:00:27.243 ++ SPDK_TEST_VFIOUSER=1
00:00:27.243 ++ SPDK_RUN_UBSAN=1
00:00:27.243 ++ NET_TYPE=phy
00:00:27.243 ++ RUN_NIGHTLY=0
00:00:27.243 + case $SPDK_TEST_NVMF_NICS in
00:00:27.243 + DRIVERS=ice
00:00:27.243 + [[ tcp == \r\d\m\a ]]
00:00:27.243 + [[ -n ice ]]
00:00:27.243 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:27.243 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:27.243 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:27.243 rmmod: ERROR: Module irdma is not currently loaded
00:00:27.243 rmmod: ERROR: Module i40iw is not currently loaded
00:00:27.243 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:27.243 + true
00:00:27.243 + for D in $DRIVERS
00:00:27.243 + sudo modprobe ice
00:00:27.243 + exit 0
00:00:27.253 [Pipeline] }
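The driver step above maps SPDK_TEST_NVMF_NICS to a kernel driver (ice for Intel E810 NICs) after unloading RDMA-capable modules that could claim the test ports; the rmmod errors are expected when those modules were never loaded. A hedged sketch of the same flow (the x722/i40e mapping is an assumption, not shown in this log):

    # Unload drivers that might claim the test NICs, then load the driver
    # matching the NIC family under test.
    case "$SPDK_TEST_NVMF_NICS" in
        e810) DRIVERS=ice ;;
        x722) DRIVERS=i40e ;;   # assumed mapping for other NIC values
    esac
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done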
00:00:27.265 [Pipeline] // withEnv
00:00:27.269 [Pipeline] }
00:00:27.283 [Pipeline] // stage
00:00:27.292 [Pipeline] catchError
00:00:27.293 [Pipeline] {
00:00:27.309 [Pipeline] timeout
00:00:27.309 Timeout set to expire in 40 min
00:00:27.312 [Pipeline] {
00:00:27.329 [Pipeline] stage
00:00:27.331 [Pipeline] { (Tests)
00:00:27.347 [Pipeline] sh
00:00:27.629 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:27.629 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:27.629 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:27.629 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:27.629 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:27.629 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:27.629 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:27.629 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:27.629 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:27.629 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:27.629 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:27.629 + source /etc/os-release
00:00:27.629 ++ NAME='Fedora Linux'
00:00:27.629 ++ VERSION='38 (Cloud Edition)'
00:00:27.629 ++ ID=fedora
00:00:27.629 ++ VERSION_ID=38
00:00:27.629 ++ VERSION_CODENAME=
00:00:27.629 ++ PLATFORM_ID=platform:f38
00:00:27.629 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:27.629 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:27.629 ++ LOGO=fedora-logo-icon
00:00:27.629 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:27.629 ++ HOME_URL=https://fedoraproject.org/
00:00:27.629 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:27.629 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:27.629 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:27.629 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:27.629 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:27.629 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:27.629 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:27.629 ++ SUPPORT_END=2024-05-14
00:00:27.629 ++ VARIANT='Cloud Edition'
00:00:27.629 ++ VARIANT_ID=cloud
00:00:27.629 + uname -a
00:00:27.629 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:27.629 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:28.569 Hugepages
00:00:28.569 node hugesize free / total
00:00:28.569 node0 1048576kB 0 / 0
00:00:28.570 node0 2048kB 0 / 0
00:00:28.570 node1 1048576kB 0 / 0
00:00:28.570 node1 2048kB 0 / 0
00:00:28.570
00:00:28.570 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:28.570 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:28.570 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:28.570 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:28.570 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:28.570 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:28.570 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:28.570 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:28.570 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:28.570 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:28.570 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:28.570 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:28.570 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:28.570 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:28.570 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:28.570 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:28.570 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:28.570 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
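The `setup.sh status` table above reports per-NUMA-node hugepage pools and the I/OAT and NVMe device bindings. The hugepage counters come straight from sysfs; a small sketch (not from the log) for reading them without SPDK's script:

    # Print free/total hugepages per NUMA node and page size, mirroring the
    # Hugepages table printed by scripts/setup.sh status.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}
            printf '%s %s %s / %s\n' "${node##*/}" "$size" \
                "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
        done
    done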
00:00:28.570 + rm -f /tmp/spdk-ld-path
00:00:28.570 + source autorun-spdk.conf
00:00:28.570 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.570 ++ SPDK_TEST_NVMF=1
00:00:28.570 ++ SPDK_TEST_NVME_CLI=1
00:00:28.570 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:28.570 ++ SPDK_TEST_NVMF_NICS=e810
00:00:28.570 ++ SPDK_TEST_VFIOUSER=1
00:00:28.570 ++ SPDK_RUN_UBSAN=1
00:00:28.570 ++ NET_TYPE=phy
00:00:28.570 ++ RUN_NIGHTLY=0
00:00:28.570 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:28.570 + [[ -n '' ]]
00:00:28.570 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:28.570 + for M in /var/spdk/build-*-manifest.txt
00:00:28.570 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:28.570 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:28.570 + for M in /var/spdk/build-*-manifest.txt
00:00:28.570 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:28.570 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:28.570 ++ uname
00:00:28.570 + [[ Linux == \L\i\n\u\x ]]
00:00:28.570 + sudo dmesg -T
00:00:28.829 + sudo dmesg --clear
00:00:28.829 + dmesg_pid=2395152
00:00:28.829 + [[ Fedora Linux == FreeBSD ]]
00:00:28.829 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:28.829 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:28.829 + sudo dmesg -Tw
00:00:28.830 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:28.830 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:28.830 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:28.830 + [[ -x /usr/src/fio-static/fio ]]
00:00:28.830 + export FIO_BIN=/usr/src/fio-static/fio
00:00:28.830 + FIO_BIN=/usr/src/fio-static/fio
00:00:28.830 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:28.830 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:28.830 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:28.830 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:28.830 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:28.830 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:28.830 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:28.830 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:28.830 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:28.830 Test configuration:
00:00:28.830 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.830 SPDK_TEST_NVMF=1
00:00:28.830 SPDK_TEST_NVME_CLI=1
00:00:28.830 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:28.830 SPDK_TEST_NVMF_NICS=e810
00:00:28.830 SPDK_TEST_VFIOUSER=1
00:00:28.830 SPDK_RUN_UBSAN=1
00:00:28.830 NET_TYPE=phy
00:00:28.830 RUN_NIGHTLY=0
21:14:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
21:14:54 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
21:14:54 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
21:14:54 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
21:14:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:14:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:14:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:14:54 -- paths/export.sh@5 -- $ export PATH
21:14:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:14:54 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
21:14:54 -- common/autobuild_common.sh@435 -- $ date +%s
21:14:54 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713986094.XXXXXX
21:14:54 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713986094.hNWzGg
21:14:54 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
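autobuild derives a per-run scratch directory from mktemp with an epoch-stamped template, as the last few entries above show. The equivalent standalone commands, assuming nothing beyond what the log prints:

    # Epoch-stamped scratch workspace, as in common/autobuild_common.sh above.
    ts=$(date +%s)
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")
    echo "$SPDK_WORKSPACE"   # e.g. /tmp/spdk_1713986094.hNWzGg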
21:14:54 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
21:14:54 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
21:14:54 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
21:14:54 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
21:14:54 -- common/autobuild_common.sh@451 -- $ get_config_params
21:14:54 -- common/autotest_common.sh@385 -- $ xtrace_disable
21:14:54 -- common/autotest_common.sh@10 -- $ set +x
21:14:54 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
21:14:54 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
21:14:54 -- pm/common@17 -- $ local monitor
21:14:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:14:54 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2395186
21:14:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:14:54 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2395188
21:14:54 -- pm/common@21 -- $ date +%s
21:14:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:14:54 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2395190
21:14:54 -- pm/common@21 -- $ date +%s
21:14:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:14:54 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2395194
21:14:54 -- pm/common@26 -- $ sleep 1
21:14:54 -- pm/common@21 -- $ date +%s
21:14:54 -- pm/common@21 -- $ date +%s
21:14:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713986094
21:14:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713986094
21:14:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713986094
21:14:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713986094
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713986094_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713986094_collect-bmc-pm.bmc.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713986094_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713986094_collect-cpu-temp.pm.log
00:00:29.805 21:14:55 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
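The four collect-* monitors above are launched in the background with a shared epoch suffix, their PIDs recorded in MONITOR_RESOURCES_PIDS, and a trap registered to stop them on exit. A hedged sketch of that pattern (the stop function body here is an assumption; SPDK's pm/common does its bookkeeping differently):

    # Launch resource collectors in the background and tear them down on exit.
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    ts=$(date +%s)
    declare -A MONITOR_RESOURCES_PIDS
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        sudo -E "spdk/scripts/perf/pm/$mon" -d "$out/power" -l -p "monitor.autobuild.sh.$ts" &
        MONITOR_RESOURCES_PIDS[$mon]=$!
    done
    stop_monitor_resources() {
        local pid
        for pid in "${MONITOR_RESOURCES_PIDS[@]}"; do
            sudo kill "$pid" 2>/dev/null || true
        done
    }
    trap stop_monitor_resources EXIT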
00:00:29.805 21:14:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:29.805 21:14:55 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:29.805 21:14:55 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:29.805 21:14:55 -- spdk/autobuild.sh@16 -- $ date -u
00:00:29.805 Wed Apr 24 07:14:55 PM UTC 2024
00:00:29.805 21:14:55 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:29.805 v24.05-pre-413-gdd57ed3e8
00:00:29.805 21:14:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:29.805 21:14:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:29.805 21:14:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:29.805 21:14:55 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:29.805 21:14:55 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:29.805 21:14:55 -- common/autotest_common.sh@10 -- $ set +x
00:00:29.805 ************************************
00:00:29.805 START TEST ubsan
00:00:29.805 ************************************
00:00:29.805 21:14:55 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:00:29.805 using ubsan
00:00:29.805
00:00:29.805 real 0m0.000s
00:00:29.805 user 0m0.000s
00:00:29.805 sys 0m0.000s
00:00:29.805 21:14:55 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:29.805 21:14:55 -- common/autotest_common.sh@10 -- $ set +x
00:00:29.805 ************************************
00:00:29.805 END TEST ubsan
00:00:29.805 ************************************
00:00:30.064 21:14:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:30.064 21:14:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:30.064 21:14:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:30.064 21:14:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:30.064 21:14:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:30.064 21:14:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:30.064 21:14:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:30.064 21:14:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:30.064 21:14:55 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:30.064 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:30.064 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:30.323 Using 'verbs' RDMA provider
00:00:40.911 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:50.895 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:50.895 Creating mk/config.mk...done.
00:00:50.895 Creating mk/cc.flags.mk...done.
00:00:50.895 Type 'make' to build.
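run_test, used for the ubsan check above and the make step that follows, wraps a command in START/END banners around a timed invocation. A minimal stand-in with the same observable shape (SPDK's real helper in autotest_common.sh does more bookkeeping than this):

    # Minimal run_test look-alike: banner, timed command, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test ubsan echo 'using ubsan'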
00:00:50.895 21:15:15 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:00:50.895 21:15:15 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:50.895 21:15:15 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:50.895 21:15:15 -- common/autotest_common.sh@10 -- $ set +x
00:00:50.895 ************************************
00:00:50.895 START TEST make
00:00:50.895 ************************************
00:00:50.895 21:15:15 -- common/autotest_common.sh@1111 -- $ make -j48
00:00:50.895 make[1]: Nothing to be done for 'all'.
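To reproduce this configure-and-build sequence outside CI, the flags can be lifted straight from the log; the -j value is machine-specific, so $(nproc) is a reasonable substitute:

    # Same configuration the job uses, run from an SPDK checkout.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"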
00:00:52.279 The Meson build system
00:00:52.279 Version: 1.3.1
00:00:52.279 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:00:52.279 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:52.279 Build type: native build
00:00:52.279 Project name: libvfio-user
00:00:52.279 Project version: 0.0.1
00:00:52.279 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:52.279 C linker for the host machine: cc ld.bfd 2.39-16
00:00:52.279 Host machine cpu family: x86_64
00:00:52.279 Host machine cpu: x86_64
00:00:52.279 Run-time dependency threads found: YES
00:00:52.279 Library dl found: YES
00:00:52.279 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:52.279 Run-time dependency json-c found: YES 0.17
00:00:52.279 Run-time dependency cmocka found: YES 1.1.7
00:00:52.279 Program pytest-3 found: NO
00:00:52.279 Program flake8 found: NO
00:00:52.279 Program misspell-fixer found: NO
00:00:52.279 Program restructuredtext-lint found: NO
00:00:52.279 Program valgrind found: YES (/usr/bin/valgrind)
00:00:52.279 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:52.279 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:52.279 Compiler for C supports arguments -Wwrite-strings: YES
00:00:52.279 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:52.279 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:00:52.279 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:00:52.279 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:52.279 Build targets in project: 8
00:00:52.279 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:00:52.279 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:00:52.279
00:00:52.279 libvfio-user 0.0.1
00:00:52.279
00:00:52.279 User defined options
00:00:52.279 buildtype : debug
00:00:52.279 default_library: shared
00:00:52.279 libdir : /usr/local/lib
00:00:52.279
00:00:52.279 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:52.854 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:53.116 [1/37] Compiling C object samples/null.p/null.c.o
00:00:53.116 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:00:53.116 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:00:53.116 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:00:53.116 [5/37] Compiling C object samples/lspci.p/lspci.c.o
00:00:53.116 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:00:53.116 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:00:53.116 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:00:53.116 [9/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:00:53.116 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:00:53.116 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:00:53.116 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:00:53.116 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:00:53.116 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:00:53.378 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:00:53.378 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:00:53.378 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:00:53.378 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:00:53.378 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:00:53.378 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:00:53.378 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:00:53.378 [22/37] Compiling C object samples/server.p/server.c.o
00:00:53.378 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:00:53.378 [24/37] Compiling C object samples/client.p/client.c.o
00:00:53.378 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:00:53.378 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:00:53.378 [27/37] Linking target samples/client
00:00:53.378 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:00:53.378 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:00:53.640 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:00:53.640 [31/37] Linking target test/unit_tests
00:00:53.640 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:00:53.640 [33/37] Linking target samples/null
00:00:53.640 [34/37] Linking target samples/server
00:00:53.640 [35/37] Linking target samples/shadow_ioeventfd_server
00:00:53.640 [36/37] Linking target samples/gpio-pci-idio-16
00:00:53.640 [37/37] Linking target samples/lspci
00:00:53.902 INFO: autodetecting backend as ninja
00:00:53.902 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:53.902 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:54.477 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:54.477 ninja: no work to do.
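The libvfio-user submodule above is configured and built with Meson/Ninja and then staged into the SPDK tree with a DESTDIR install. A hedged, generic version of that flow (build-directory name as in the log; the staging path is a placeholder):

    # Configure, build, and stage a Meson project the way the log does.
    meson setup build-debug --buildtype=debug -Ddefault_library=shared
    ninja -C build-debug
    DESTDIR=/path/to/stage meson install --quiet -C build-debug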
00:00:59.753 The Meson build system
00:00:59.753 Version: 1.3.1
00:00:59.753 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:00:59.753 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:00:59.753 Build type: native build
00:00:59.753 Program cat found: YES (/usr/bin/cat)
00:00:59.753 Project name: DPDK
00:00:59.753 Project version: 23.11.0
00:00:59.753 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:59.753 C linker for the host machine: cc ld.bfd 2.39-16
00:00:59.753 Host machine cpu family: x86_64
00:00:59.753 Host machine cpu: x86_64
00:00:59.753 Message: ## Building in Developer Mode ##
00:00:59.753 Program pkg-config found: YES (/usr/bin/pkg-config)
00:00:59.753 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:00:59.753 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:00:59.753 Program python3 found: YES (/usr/bin/python3)
00:00:59.753 Program cat found: YES (/usr/bin/cat)
00:00:59.753 Compiler for C supports arguments -march=native: YES
00:00:59.753 Checking for size of "void *" : 8
00:00:59.753 Checking for size of "void *" : 8 (cached)
00:00:59.753 Library m found: YES
00:00:59.753 Library numa found: YES
00:00:59.753 Has header "numaif.h" : YES
00:00:59.753 Library fdt found: NO
00:00:59.753 Library execinfo found: NO
00:00:59.753 Has header "execinfo.h" : YES
00:00:59.753 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:59.753 Run-time dependency libarchive found: NO (tried pkgconfig)
00:00:59.753 Run-time dependency libbsd found: NO (tried pkgconfig)
00:00:59.753 Run-time dependency jansson found: NO (tried pkgconfig)
00:00:59.753 Run-time dependency openssl found: YES 3.0.9
00:00:59.753 Run-time dependency libpcap found: YES 1.10.4
00:00:59.753 Has header "pcap.h" with dependency libpcap: YES
00:00:59.753 Compiler for C supports arguments -Wcast-qual: YES
00:00:59.753 Compiler for C supports arguments -Wdeprecated: YES
00:00:59.753 Compiler for C supports arguments -Wformat: YES
00:00:59.753 Compiler for C supports arguments -Wformat-nonliteral: NO
00:00:59.753 Compiler for C supports arguments -Wformat-security: NO
00:00:59.753 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:59.753 Compiler for C supports arguments -Wmissing-prototypes: YES
00:00:59.753 Compiler for C supports arguments -Wnested-externs: YES
00:00:59.753 Compiler for C supports arguments -Wold-style-definition: YES
00:00:59.753 Compiler for C supports arguments -Wpointer-arith: YES
00:00:59.753 Compiler for C supports arguments -Wsign-compare: YES
00:00:59.753 Compiler for C supports arguments -Wstrict-prototypes: YES
00:00:59.753 Compiler for C supports arguments -Wundef: YES
00:00:59.753 Compiler for C supports arguments -Wwrite-strings: YES
00:00:59.753 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:00:59.753 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:00:59.753 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:59.753 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:00:59.753 Program objdump found: YES (/usr/bin/objdump)
00:00:59.753 Compiler for C supports arguments -mavx512f: YES
00:00:59.753 Checking if "AVX512 checking" compiles: YES
00:00:59.753 Fetching value of define "__SSE4_2__" : 1
00:00:59.753 Fetching value of define "__AES__" : 1
00:00:59.753 Fetching value of define "__AVX__" : 1
00:00:59.753 Fetching value of define "__AVX2__" : (undefined)
00:00:59.753 Fetching value of define "__AVX512BW__" : (undefined)
00:00:59.753 Fetching value of define "__AVX512CD__" : (undefined)
00:00:59.753 Fetching value of define "__AVX512DQ__" : (undefined)
00:00:59.753 Fetching value of define "__AVX512F__" : (undefined)
00:00:59.753 Fetching value of define "__AVX512VL__" : (undefined)
00:00:59.753 Fetching value of define "__PCLMUL__" : 1
00:00:59.753 Fetching value of define "__RDRND__" : 1
00:00:59.753 Fetching value of define "__RDSEED__" : (undefined)
00:00:59.753 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:00:59.753 Fetching value of define "__znver1__" : (undefined)
00:00:59.753 Fetching value of define "__znver2__" : (undefined)
00:00:59.753 Fetching value of define "__znver3__" : (undefined)
00:00:59.753 Fetching value of define "__znver4__" : (undefined)
00:00:59.753 Compiler for C supports arguments -Wno-format-truncation: YES
00:00:59.753 Message: lib/log: Defining dependency "log"
00:00:59.753 Message: lib/kvargs: Defining dependency "kvargs"
00:00:59.753 Message: lib/telemetry: Defining dependency "telemetry"
00:00:59.753 Checking for function "getentropy" : NO
00:00:59.753 Message: lib/eal: Defining dependency "eal"
00:00:59.753 Message: lib/ring: Defining dependency "ring"
00:00:59.753 Message: lib/rcu: Defining dependency "rcu"
00:00:59.753 Message: lib/mempool: Defining dependency "mempool"
00:00:59.753 Message: lib/mbuf: Defining dependency "mbuf"
00:00:59.753 Fetching value of define "__PCLMUL__" : 1 (cached)
00:00:59.753 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:00:59.753 Compiler for C supports arguments -mpclmul: YES
00:00:59.753 Compiler for C supports arguments -maes: YES
00:00:59.753 Compiler for C supports arguments -mavx512f: YES (cached)
00:00:59.753 Compiler for C supports arguments -mavx512bw: YES
00:00:59.753 Compiler for C supports arguments -mavx512dq: YES
00:00:59.753 Compiler for C supports arguments -mavx512vl: YES
00:00:59.753 Compiler for C supports arguments -mvpclmulqdq: YES
00:00:59.753 Compiler for C supports arguments -mavx2: YES
00:00:59.753 Compiler for C supports arguments -mavx: YES
00:00:59.753 Message: lib/net: Defining dependency "net"
00:00:59.753 Message: lib/meter: Defining dependency "meter"
00:00:59.753 Message: lib/ethdev: Defining dependency "ethdev"
00:00:59.753 Message: lib/pci: Defining dependency "pci"
00:00:59.753 Message: lib/cmdline: Defining dependency "cmdline"
00:00:59.753 Message: lib/hash: Defining dependency "hash"
00:00:59.753 Message: lib/timer: Defining dependency "timer"
00:00:59.753 Message: lib/compressdev: Defining dependency "compressdev"
00:00:59.753 Message: lib/cryptodev: Defining dependency "cryptodev"
00:00:59.753 Message: lib/dmadev: Defining dependency "dmadev"
00:00:59.753 Compiler for C supports arguments -Wno-cast-qual: YES
00:00:59.753 Message: lib/power: Defining dependency "power"
00:00:59.753 Message: lib/reorder: Defining dependency "reorder"
00:00:59.753 Message: lib/security: Defining dependency "security"
00:00:59.753 Has header "linux/userfaultfd.h" : YES
00:00:59.753 Has header "linux/vduse.h" : YES
00:00:59.753 Message: lib/vhost: Defining dependency "vhost"
00:00:59.753 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:00:59.753 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:00:59.753 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:00:59.753 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:00:59.753 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:00:59.753 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:00:59.753 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:00:59.753 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:00:59.753 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:00:59.753 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:00:59.753 Program doxygen found: YES (/usr/bin/doxygen)
00:00:59.753 Configuring doxy-api-html.conf using configuration
00:00:59.753 Configuring doxy-api-man.conf using configuration
00:00:59.753 Program mandb found: YES (/usr/bin/mandb)
00:00:59.753 Program sphinx-build found: NO
00:00:59.753 Configuring rte_build_config.h using configuration
00:00:59.753 Message:
00:00:59.753 =================
00:00:59.753 Applications Enabled
00:00:59.753 =================
00:00:59.753
00:00:59.753 apps:
00:00:59.753
00:00:59.753
00:00:59.753 Message:
00:00:59.753 =================
00:00:59.753 Libraries Enabled
00:00:59.753 =================
00:00:59.753
00:00:59.753 libs:
00:00:59.753 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:00:59.753 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:00:59.753 cryptodev, dmadev, power, reorder, security, vhost,
00:00:59.753
00:00:59.753 Message:
00:00:59.753 ===============
00:00:59.753 Drivers Enabled
00:00:59.753 ===============
00:00:59.753
00:00:59.753 common:
00:00:59.753
00:00:59.753 bus:
00:00:59.753 pci, vdev,
00:00:59.753 mempool:
00:00:59.753 ring,
00:00:59.753 dma:
00:00:59.753
00:00:59.753 net:
00:00:59.753
00:00:59.753 crypto:
00:00:59.753
00:00:59.753 compress:
00:00:59.753
00:00:59.753 vdpa:
00:00:59.753
00:00:59.753
00:00:59.753 Message:
00:00:59.753 =================
00:00:59.753 Content Skipped
00:00:59.753 =================
00:00:59.753
00:00:59.753 apps:
00:00:59.753 dumpcap: explicitly disabled via build config
00:00:59.753 graph: explicitly disabled via build config
00:00:59.753 pdump: explicitly disabled via build config
00:00:59.753 proc-info: explicitly disabled via build config
00:00:59.753 test-acl: explicitly disabled via build config
00:00:59.753 test-bbdev: explicitly disabled via build config
00:00:59.753 test-cmdline: explicitly disabled via build config
00:00:59.753 test-compress-perf: explicitly disabled via build config
00:00:59.753 test-crypto-perf: explicitly disabled via build config
00:00:59.753 test-dma-perf: explicitly disabled via build config
00:00:59.753 test-eventdev: explicitly disabled via build config
00:00:59.753 test-fib: explicitly disabled via build config
00:00:59.753 test-flow-perf: explicitly disabled via build config
00:00:59.753 test-gpudev: explicitly disabled via build config
00:00:59.753 test-mldev: explicitly disabled via build config
00:00:59.753 test-pipeline: explicitly disabled via build config
00:00:59.753 test-pmd: explicitly disabled via build config
00:00:59.753 test-regex: explicitly disabled via build config
00:00:59.753 test-sad: explicitly disabled via build config
00:00:59.753 test-security-perf: explicitly disabled via build config
00:00:59.753
00:00:59.753 libs:
00:00:59.753 metrics: explicitly disabled via build config
00:00:59.754 acl: explicitly disabled via build config
00:00:59.754 bbdev: explicitly disabled via build config
00:00:59.754 bitratestats: explicitly disabled via build config
00:00:59.754 bpf: explicitly disabled via build config
00:00:59.754 cfgfile: explicitly disabled via build config
00:00:59.754 distributor: explicitly disabled via build config
00:00:59.754 efd: explicitly disabled via build config
00:00:59.754 eventdev: explicitly disabled via build config
00:00:59.754 dispatcher: explicitly disabled via build config
00:00:59.754 gpudev: explicitly disabled via build config
00:00:59.754 gro: explicitly disabled via build config
00:00:59.754 gso: explicitly disabled via build config
00:00:59.754 ip_frag: explicitly disabled via build config
00:00:59.754 jobstats: explicitly disabled via build config
00:00:59.754 latencystats: explicitly disabled via build config
00:00:59.754 lpm: explicitly disabled via build config
00:00:59.754 member: explicitly disabled via build config
00:00:59.754 pcapng: explicitly disabled via build config
00:00:59.754 rawdev: explicitly disabled via build config
00:00:59.754 regexdev: explicitly disabled via build config
00:00:59.754 mldev: explicitly disabled via build config
00:00:59.754 rib: explicitly disabled via build config
00:00:59.754 sched: explicitly disabled via build config
00:00:59.754 stack: explicitly disabled via build config
00:00:59.754 ipsec: explicitly disabled via build config
00:00:59.754 pdcp: explicitly disabled via build config
00:00:59.754 fib: explicitly disabled via build config
00:00:59.754 port: explicitly disabled via build config
00:00:59.754 pdump: explicitly disabled via build config
00:00:59.754 table: explicitly disabled via build config
00:00:59.754 pipeline: explicitly disabled via build config
00:00:59.754 graph: explicitly disabled via build config
00:00:59.754 node: explicitly disabled via build config
00:00:59.754
00:00:59.754 drivers:
00:00:59.754 common/cpt: not in enabled drivers build config
00:00:59.754 common/dpaax: not in enabled drivers build config
00:00:59.754 common/iavf: not in enabled drivers build config
00:00:59.754 common/idpf: not in enabled drivers build config
00:00:59.754 common/mvep: not in enabled drivers build config
00:00:59.754 common/octeontx: not in enabled drivers build config
00:00:59.754 bus/auxiliary: not in enabled drivers build config
00:00:59.754 bus/cdx: not in enabled drivers build config
00:00:59.754 bus/dpaa: not in enabled drivers build config
00:00:59.754 bus/fslmc: not in enabled drivers build config
00:00:59.754 bus/ifpga: not in enabled drivers build config
00:00:59.754 bus/platform: not in enabled drivers build config
00:00:59.754 bus/vmbus: not in enabled drivers build config
00:00:59.754 common/cnxk: not in enabled drivers build config
00:00:59.754 common/mlx5: not in enabled drivers build config
00:00:59.754 common/nfp: not in enabled drivers build config
00:00:59.754 common/qat: not in enabled drivers build config
00:00:59.754 common/sfc_efx: not in enabled drivers build config
00:00:59.754 mempool/bucket: not in enabled drivers build config
00:00:59.754 mempool/cnxk: not in enabled drivers build config
00:00:59.754 mempool/dpaa: not in enabled drivers build config
00:00:59.754 mempool/dpaa2: not in enabled drivers build config
00:00:59.754 mempool/octeontx: not in enabled drivers build config
00:00:59.754 mempool/stack: not in enabled drivers build config
00:00:59.754 dma/cnxk: not in enabled drivers build config
00:00:59.754 dma/dpaa: not in enabled drivers build config
00:00:59.754 dma/dpaa2: not in enabled drivers build config
00:00:59.754 dma/hisilicon: not in enabled drivers build config
00:00:59.754 dma/idxd: not in enabled drivers build config
00:00:59.754 dma/ioat: not in enabled drivers build config
00:00:59.754 dma/skeleton: not in enabled drivers build config
00:00:59.754 net/af_packet: not in enabled drivers build config
00:00:59.754 net/af_xdp: not in enabled drivers build config
00:00:59.754 net/ark: not in enabled drivers build config
00:00:59.754 net/atlantic: not in enabled drivers build config
00:00:59.754 net/avp: not in enabled drivers build config
00:00:59.754 net/axgbe: not in enabled drivers build config
00:00:59.754 net/bnx2x: not in enabled drivers build config
00:00:59.754 net/bnxt: not in enabled drivers build config
00:00:59.754 net/bonding: not in enabled drivers build config
00:00:59.754 net/cnxk: not in enabled drivers build config
00:00:59.754 net/cpfl: not in enabled drivers build config
00:00:59.754 net/cxgbe: not in enabled drivers build config
00:00:59.754 net/dpaa: not in enabled drivers build config
00:00:59.754 net/dpaa2: not in enabled drivers build config
00:00:59.754 net/e1000: not in enabled drivers build config
00:00:59.754 net/ena: not in enabled drivers build config
00:00:59.754 net/enetc: not in enabled drivers build config
00:00:59.754 net/enetfec: not in enabled drivers build config
00:00:59.754 net/enic: not in enabled drivers build config
00:00:59.754 net/failsafe: not in enabled drivers build config
00:00:59.754 net/fm10k: not in enabled drivers build config
00:00:59.754 net/gve: not in enabled drivers build config
00:00:59.754 net/hinic: not in enabled drivers build config
00:00:59.754 net/hns3: not in enabled drivers build config
00:00:59.754 net/i40e: not in enabled drivers build config
00:00:59.754 net/iavf: not in enabled drivers build config
00:00:59.754 net/ice: not in enabled drivers build config
00:00:59.754 net/idpf: not in enabled drivers build config
00:00:59.754 net/igc: not in enabled drivers build config
00:00:59.754 net/ionic: not in enabled drivers build config
00:00:59.754 net/ipn3ke: not in enabled drivers build config
00:00:59.754 net/ixgbe: not in enabled drivers build config
00:00:59.754 net/mana: not in enabled drivers build config
00:00:59.754 net/memif: not in enabled drivers build config
00:00:59.754 net/mlx4: not in enabled drivers build config
00:00:59.754 net/mlx5: not in enabled drivers build config
00:00:59.754 net/mvneta: not in enabled drivers build config
00:00:59.754 net/mvpp2: not in enabled drivers build config
00:00:59.754 net/netvsc: not in enabled drivers build config
00:00:59.754 net/nfb: not in enabled drivers build config
00:00:59.754 net/nfp: not in enabled drivers build config
00:00:59.754 net/ngbe: not in enabled drivers build config
00:00:59.754 net/null: not in enabled drivers build config
00:00:59.754 net/octeontx: not in enabled drivers build config
00:00:59.754 net/octeon_ep: not in enabled drivers build config
00:00:59.754 net/pcap: not in enabled drivers build config
00:00:59.754 net/pfe: not in enabled drivers build config
00:00:59.754 net/qede: not in enabled drivers build config
00:00:59.754 net/ring: not in enabled drivers build config
00:00:59.754 net/sfc: not in enabled drivers build config
00:00:59.754 net/softnic: not in enabled drivers build config
00:00:59.754 net/tap: not in enabled drivers build config
00:00:59.754 net/thunderx: not in enabled drivers build config
00:00:59.754 net/txgbe: not in enabled drivers build config
00:00:59.754 net/vdev_netvsc: not in enabled drivers build config
00:00:59.754 net/vhost: not in enabled drivers build config
00:00:59.754 net/virtio: not in enabled drivers build config
00:00:59.754 net/vmxnet3: not in enabled drivers build config
00:00:59.754 raw/*: missing internal dependency, "rawdev"
00:00:59.754 crypto/armv8: not in enabled drivers build config
00:00:59.754 crypto/bcmfs: not in enabled drivers build config
00:00:59.754 crypto/caam_jr: not in enabled drivers build config
00:00:59.754 crypto/ccp: not in enabled drivers build config
00:00:59.754 crypto/cnxk: not in enabled drivers build config
00:00:59.754 crypto/dpaa_sec: not in enabled drivers build config
00:00:59.754 crypto/dpaa2_sec: not in enabled drivers build config
00:00:59.754 crypto/ipsec_mb: not in enabled drivers build config
00:00:59.754 crypto/mlx5: not in enabled drivers build config
00:00:59.754 crypto/mvsam: not in enabled drivers build config
00:00:59.754 crypto/nitrox: not in enabled drivers build config
00:00:59.754 crypto/null: not in enabled drivers build config
00:00:59.754 crypto/octeontx: not in enabled drivers build config
00:00:59.754 crypto/openssl: not in enabled drivers build config
00:00:59.754 crypto/scheduler: not in enabled drivers build config
00:00:59.754 crypto/uadk: not in enabled drivers build config
00:00:59.754 crypto/virtio: not in enabled drivers build config
00:00:59.754 compress/isal: not in enabled drivers build config
00:00:59.754 compress/mlx5: not in enabled drivers build config
00:00:59.754 compress/octeontx: not in enabled drivers build config
00:00:59.754 compress/zlib: not in enabled drivers build config
00:00:59.754 regex/*: missing internal dependency, "regexdev"
00:00:59.754 ml/*: missing internal dependency, "mldev"
00:00:59.754 vdpa/ifc: not in enabled drivers build config
00:00:59.754 vdpa/mlx5: not in enabled drivers build config
00:00:59.754 vdpa/nfp: not in enabled drivers build config
00:00:59.754 vdpa/sfc: not in enabled drivers build config
00:00:59.754 event/*: missing internal dependency, "eventdev"
00:00:59.754 baseband/*: missing internal dependency, "bbdev"
00:00:59.754 gpu/*: missing internal dependency, "gpudev"
00:00:59.754
00:00:59.754
00:00:59.754 Build targets in project: 85
00:00:59.754
00:00:59.754 DPDK 23.11.0
00:00:59.754
00:00:59.754 User defined options
00:00:59.754 buildtype : debug
00:00:59.754 default_library : shared
00:00:59.754 libdir : lib
00:00:59.754 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:59.754 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:00:59.754 c_link_args :
00:00:59.754 cpu_instruction_set: native
00:00:59.754 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex
00:00:59.754 disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso
00:00:59.754 enable_docs : false
00:00:59.754 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:00:59.754 enable_kmods : false
00:00:59.754 tests : false
00:00:59.754
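The "User defined options" block above is the DPDK Meson configuration that SPDK's build drives internally. A hedged sketch of an equivalent manual invocation (option names and values taken from the summary above; the disable_apps/disable_libs lists are abridged here, the full sets are printed in the log):

    # Configure and build DPDK with options matching the summary above.
    meson setup build-tmp --buildtype=debug -Ddefault_library=shared \
        -Dc_args='-fPIC -Werror' \
        -Ddisable_apps='dumpcap,graph,pdump' \
        -Ddisable_libs='acl,bbdev,bpf' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
        -Dtests=false -Denable_docs=false -Denable_kmods=false
    ninja -C build-tmp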
00:00:59.754 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:59.754 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:00:59.754 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:00:59.754 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:00:59.754 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:00:59.754 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:00:59.754 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:00:59.754 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:00:59.754 [7/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:00:59.754 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:00:59.754 [9/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:00:59.754 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:00:59.754 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:00:59.754 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:00:59.754 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:00:59.754 [14/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:00:59.754 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:00:59.754 [16/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:00:59.754 [17/265] Linking static target lib/librte_log.a
00:01:00.016 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:00.016 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:00.016 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:00.016 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:00.275 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:00.547 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:00.547 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:00.547 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:00.547 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:00.547 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:00.547 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:00.547 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:00.547 [30/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:00.547 [31/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:00.547 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:00.547 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:00.547 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:00.547 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:00.831 [36/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:00.831 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:00.831 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:00.831 [39/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:00.831 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:00.831 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:00.831 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:00.831 [43/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:00.831 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:00.831 [45/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:00.831 [46/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:00.831 [47/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:00.831 [48/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:00.831 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:00.831 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:00.831 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:00.831 [52/265] Linking static target lib/librte_telemetry.a
00:01:00.831 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:00.831 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:00.831 [55/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:00.831 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:00.831 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:00.831 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:00.831 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:00.831 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:00.831 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:00.831 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:00.831 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:00.831 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:01.104 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:01.104 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:01.104 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:01.104 [68/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:01.104 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:01.104 [70/265] Linking static target lib/librte_pci.a
00:01:01.104 [71/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:01.104 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:01.104 [73/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:01.104 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:01.104 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:01.104 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:01.104 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:01.104 [78/265] Linking target lib/librte_log.so.24.0
lib/librte_log.so.24.0 00:01:01.367 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:01.367 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:01.367 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:01.367 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:01.367 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:01.367 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:01.367 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:01.367 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:01.367 [87/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:01.367 [88/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:01.632 [89/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:01.632 [90/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:01.632 [91/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:01.632 [92/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:01.632 [93/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:01.632 [94/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:01.632 [95/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:01.632 [96/265] Linking target lib/librte_kvargs.so.24.0 00:01:01.632 [97/265] Linking static target lib/librte_ring.a 00:01:01.632 [98/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:01.632 [99/265] Linking static target lib/librte_eal.a 00:01:01.632 [100/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:01.632 [101/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.632 [102/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:01.632 [103/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:01.632 [104/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:01.632 [105/265] Linking static target lib/librte_meter.a 00:01:01.632 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:01.632 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:01.891 [108/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:01.891 [109/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.891 [110/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:01.891 [111/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:01.891 [112/265] Linking target lib/librte_telemetry.so.24.0 00:01:01.891 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:01.891 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:01.891 [115/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:01.891 [116/265] Linking static target lib/librte_rcu.a 00:01:01.891 [117/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:01.891 [118/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:01.891 [119/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:01.891 
[120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:01.891 [121/265] Linking static target lib/librte_mempool.a 00:01:01.891 [122/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:01.891 [123/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:02.153 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:02.153 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:02.153 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:02.153 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:02.153 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:02.153 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:02.153 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:02.153 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:02.153 [132/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:02.153 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:02.153 [134/265] Linking static target lib/librte_cmdline.a 00:01:02.153 [135/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:02.153 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:02.153 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:02.153 [138/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:02.414 [139/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.414 [140/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.414 [141/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:02.414 [142/265] Linking static target lib/librte_net.a 00:01:02.414 [143/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:02.414 [144/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:02.414 [145/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:02.414 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:02.414 [147/265] Linking static target lib/librte_timer.a 00:01:02.414 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:02.414 [149/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.414 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:02.676 [151/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:02.676 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:02.676 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:02.676 [154/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.676 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:02.676 [156/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:02.676 [157/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:02.676 [158/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:02.676 [159/265] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:01:02.676 [160/265] Linking static target lib/librte_dmadev.a 00:01:02.933 [161/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.933 [162/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:02.933 [163/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:02.933 [164/265] Linking static target lib/librte_compressdev.a 00:01:02.933 [165/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:02.933 [166/265] Linking static target lib/librte_hash.a 00:01:02.933 [167/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:02.933 [168/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.933 [169/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:02.933 [170/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:02.933 [171/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:02.933 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:02.933 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:02.933 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:02.933 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:02.933 [176/265] Linking static target lib/librte_power.a 00:01:02.933 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:02.933 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:03.190 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:03.190 [180/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.190 [181/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.190 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:03.190 [183/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:03.190 [184/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:03.190 [185/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:03.190 [186/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:03.190 [187/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:03.190 [188/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:03.190 [189/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:03.449 [190/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:03.449 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:03.449 [192/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:03.449 [193/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:03.449 [194/265] Linking static target lib/librte_mbuf.a 00:01:03.449 [195/265] Linking static target lib/librte_security.a 00:01:03.449 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:03.449 [197/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.449 [198/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:03.449 [199/265] Generating lib/hash.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:03.449 [200/265] Linking static target lib/librte_reorder.a 00:01:03.449 [201/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:03.449 [202/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:03.449 [203/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:03.449 [204/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:03.449 [205/265] Linking static target drivers/librte_bus_vdev.a 00:01:03.449 [206/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:03.449 [207/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:03.449 [208/265] Linking static target drivers/librte_mempool_ring.a 00:01:03.449 [209/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:03.449 [210/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:03.449 [211/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:03.449 [212/265] Linking static target drivers/librte_bus_pci.a 00:01:03.449 [213/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.706 [214/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:03.706 [215/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.706 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.706 [217/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.706 [218/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.706 [219/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:03.964 [220/265] Linking static target lib/librte_ethdev.a 00:01:03.964 [221/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:03.964 [222/265] Linking static target lib/librte_cryptodev.a 00:01:03.964 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.897 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.270 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:07.643 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.907 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.907 [228/265] Linking target lib/librte_eal.so.24.0 00:01:08.165 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:08.165 [230/265] Linking target lib/librte_ring.so.24.0 00:01:08.165 [231/265] Linking target lib/librte_pci.so.24.0 00:01:08.165 [232/265] Linking target lib/librte_meter.so.24.0 00:01:08.165 [233/265] Linking target lib/librte_dmadev.so.24.0 00:01:08.165 [234/265] Linking target lib/librte_timer.so.24.0 00:01:08.165 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:08.165 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:08.165 [237/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:08.165 [238/265] 
Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:08.165 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:08.165 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:08.165 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:08.165 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:08.165 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:08.424 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:08.424 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:08.424 [246/265] Linking target lib/librte_mbuf.so.24.0 00:01:08.424 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:08.424 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:08.681 [249/265] Linking target lib/librte_compressdev.so.24.0 00:01:08.681 [250/265] Linking target lib/librte_net.so.24.0 00:01:08.681 [251/265] Linking target lib/librte_reorder.so.24.0 00:01:08.681 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:08.681 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:08.681 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:08.681 [255/265] Linking target lib/librte_hash.so.24.0 00:01:08.681 [256/265] Linking target lib/librte_cmdline.so.24.0 00:01:08.681 [257/265] Linking target lib/librte_security.so.24.0 00:01:08.681 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:08.939 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:08.939 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:08.939 [261/265] Linking target lib/librte_power.so.24.0 00:01:11.470 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:11.470 [263/265] Linking static target lib/librte_vhost.a 00:01:12.847 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.847 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:12.848 INFO: autodetecting backend as ninja 00:01:12.848 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:13.414 CC lib/log/log.o 00:01:13.414 CC lib/log/log_flags.o 00:01:13.414 CC lib/log/log_deprecated.o 00:01:13.414 CC lib/ut_mock/mock.o 00:01:13.414 CC lib/ut/ut.o 00:01:13.672 LIB libspdk_ut_mock.a 00:01:13.672 LIB libspdk_log.a 00:01:13.672 SO libspdk_ut_mock.so.6.0 00:01:13.672 LIB libspdk_ut.a 00:01:13.672 SO libspdk_ut.so.2.0 00:01:13.672 SO libspdk_log.so.7.0 00:01:13.672 SYMLINK libspdk_ut_mock.so 00:01:13.672 SYMLINK libspdk_ut.so 00:01:13.672 SYMLINK libspdk_log.so 00:01:13.930 CC lib/ioat/ioat.o 00:01:13.930 CC lib/dma/dma.o 00:01:13.930 CXX lib/trace_parser/trace.o 00:01:13.930 CC lib/util/base64.o 00:01:13.930 CC lib/util/bit_array.o 00:01:13.930 CC lib/util/cpuset.o 00:01:13.930 CC lib/util/crc16.o 00:01:13.930 CC lib/util/crc32.o 00:01:13.930 CC lib/util/crc32c.o 00:01:13.930 CC lib/util/crc32_ieee.o 00:01:13.930 CC lib/util/crc64.o 00:01:13.930 CC lib/util/dif.o 00:01:13.930 CC lib/util/fd.o 00:01:13.930 CC lib/util/file.o 00:01:13.930 CC lib/util/hexlify.o 00:01:13.930 CC lib/util/iov.o 00:01:13.930 CC lib/util/math.o 00:01:13.930 CC lib/util/pipe.o 00:01:13.930 CC 
lib/util/strerror_tls.o 00:01:13.930 CC lib/util/string.o 00:01:13.930 CC lib/util/uuid.o 00:01:13.930 CC lib/util/fd_group.o 00:01:13.930 CC lib/util/zipf.o 00:01:13.930 CC lib/util/xor.o 00:01:14.189 CC lib/vfio_user/host/vfio_user.o 00:01:14.189 CC lib/vfio_user/host/vfio_user_pci.o 00:01:14.189 LIB libspdk_dma.a 00:01:14.189 SO libspdk_dma.so.4.0 00:01:14.189 SYMLINK libspdk_dma.so 00:01:14.189 LIB libspdk_ioat.a 00:01:14.189 SO libspdk_ioat.so.7.0 00:01:14.447 SYMLINK libspdk_ioat.so 00:01:14.447 LIB libspdk_vfio_user.a 00:01:14.447 SO libspdk_vfio_user.so.5.0 00:01:14.447 SYMLINK libspdk_vfio_user.so 00:01:14.447 LIB libspdk_util.a 00:01:14.447 SO libspdk_util.so.9.0 00:01:14.705 SYMLINK libspdk_util.so 00:01:14.963 CC lib/json/json_parse.o 00:01:14.963 CC lib/conf/conf.o 00:01:14.963 CC lib/env_dpdk/env.o 00:01:14.963 CC lib/vmd/vmd.o 00:01:14.963 CC lib/idxd/idxd.o 00:01:14.963 CC lib/rdma/common.o 00:01:14.963 CC lib/json/json_util.o 00:01:14.963 CC lib/idxd/idxd_user.o 00:01:14.963 CC lib/env_dpdk/memory.o 00:01:14.963 CC lib/rdma/rdma_verbs.o 00:01:14.963 CC lib/vmd/led.o 00:01:14.963 CC lib/json/json_write.o 00:01:14.963 CC lib/env_dpdk/pci.o 00:01:14.963 CC lib/env_dpdk/init.o 00:01:14.963 CC lib/env_dpdk/threads.o 00:01:14.963 CC lib/env_dpdk/pci_ioat.o 00:01:14.963 CC lib/env_dpdk/pci_virtio.o 00:01:14.963 CC lib/env_dpdk/pci_vmd.o 00:01:14.963 CC lib/env_dpdk/pci_idxd.o 00:01:14.963 CC lib/env_dpdk/pci_event.o 00:01:14.963 CC lib/env_dpdk/sigbus_handler.o 00:01:14.963 CC lib/env_dpdk/pci_dpdk.o 00:01:14.963 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:14.963 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:14.963 LIB libspdk_trace_parser.a 00:01:15.222 SO libspdk_trace_parser.so.5.0 00:01:15.222 LIB libspdk_conf.a 00:01:15.222 SO libspdk_conf.so.6.0 00:01:15.222 LIB libspdk_json.a 00:01:15.222 LIB libspdk_rdma.a 00:01:15.222 SYMLINK libspdk_trace_parser.so 00:01:15.222 SYMLINK libspdk_conf.so 00:01:15.222 SO libspdk_rdma.so.6.0 00:01:15.222 SO libspdk_json.so.6.0 00:01:15.222 SYMLINK libspdk_rdma.so 00:01:15.222 SYMLINK libspdk_json.so 00:01:15.481 LIB libspdk_idxd.a 00:01:15.481 CC lib/jsonrpc/jsonrpc_server.o 00:01:15.481 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:15.481 CC lib/jsonrpc/jsonrpc_client.o 00:01:15.481 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:15.481 SO libspdk_idxd.so.12.0 00:01:15.481 SYMLINK libspdk_idxd.so 00:01:15.481 LIB libspdk_vmd.a 00:01:15.481 SO libspdk_vmd.so.6.0 00:01:15.739 SYMLINK libspdk_vmd.so 00:01:15.739 LIB libspdk_jsonrpc.a 00:01:15.739 SO libspdk_jsonrpc.so.6.0 00:01:15.739 SYMLINK libspdk_jsonrpc.so 00:01:15.998 CC lib/rpc/rpc.o 00:01:16.256 LIB libspdk_rpc.a 00:01:16.256 SO libspdk_rpc.so.6.0 00:01:16.256 SYMLINK libspdk_rpc.so 00:01:16.514 CC lib/trace/trace.o 00:01:16.514 CC lib/trace/trace_flags.o 00:01:16.514 CC lib/trace/trace_rpc.o 00:01:16.514 CC lib/notify/notify.o 00:01:16.514 CC lib/keyring/keyring.o 00:01:16.514 CC lib/notify/notify_rpc.o 00:01:16.514 CC lib/keyring/keyring_rpc.o 00:01:16.515 LIB libspdk_notify.a 00:01:16.515 SO libspdk_notify.so.6.0 00:01:16.773 LIB libspdk_trace.a 00:01:16.773 LIB libspdk_keyring.a 00:01:16.773 SYMLINK libspdk_notify.so 00:01:16.773 SO libspdk_trace.so.10.0 00:01:16.773 SO libspdk_keyring.so.1.0 00:01:16.773 SYMLINK libspdk_trace.so 00:01:16.773 SYMLINK libspdk_keyring.so 00:01:17.031 CC lib/sock/sock.o 00:01:17.031 CC lib/sock/sock_rpc.o 00:01:17.031 LIB libspdk_env_dpdk.a 00:01:17.031 CC lib/thread/thread.o 00:01:17.031 CC lib/thread/iobuf.o 00:01:17.031 SO libspdk_env_dpdk.so.14.0 00:01:17.031 SYMLINK 
libspdk_env_dpdk.so 00:01:17.289 LIB libspdk_sock.a 00:01:17.289 SO libspdk_sock.so.9.0 00:01:17.289 SYMLINK libspdk_sock.so 00:01:17.548 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:17.548 CC lib/nvme/nvme_ctrlr.o 00:01:17.548 CC lib/nvme/nvme_fabric.o 00:01:17.548 CC lib/nvme/nvme_ns_cmd.o 00:01:17.548 CC lib/nvme/nvme_ns.o 00:01:17.548 CC lib/nvme/nvme_pcie_common.o 00:01:17.548 CC lib/nvme/nvme_pcie.o 00:01:17.548 CC lib/nvme/nvme_qpair.o 00:01:17.548 CC lib/nvme/nvme.o 00:01:17.548 CC lib/nvme/nvme_quirks.o 00:01:17.548 CC lib/nvme/nvme_transport.o 00:01:17.548 CC lib/nvme/nvme_discovery.o 00:01:17.548 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:17.548 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:17.548 CC lib/nvme/nvme_tcp.o 00:01:17.548 CC lib/nvme/nvme_opal.o 00:01:17.548 CC lib/nvme/nvme_io_msg.o 00:01:17.548 CC lib/nvme/nvme_poll_group.o 00:01:17.548 CC lib/nvme/nvme_zns.o 00:01:17.548 CC lib/nvme/nvme_stubs.o 00:01:17.548 CC lib/nvme/nvme_auth.o 00:01:17.548 CC lib/nvme/nvme_cuse.o 00:01:17.548 CC lib/nvme/nvme_vfio_user.o 00:01:17.548 CC lib/nvme/nvme_rdma.o 00:01:18.484 LIB libspdk_thread.a 00:01:18.484 SO libspdk_thread.so.10.0 00:01:18.484 SYMLINK libspdk_thread.so 00:01:18.742 CC lib/virtio/virtio.o 00:01:18.742 CC lib/init/json_config.o 00:01:18.742 CC lib/blob/blobstore.o 00:01:18.742 CC lib/accel/accel.o 00:01:18.742 CC lib/vfu_tgt/tgt_endpoint.o 00:01:18.742 CC lib/virtio/virtio_vhost_user.o 00:01:18.742 CC lib/blob/request.o 00:01:18.742 CC lib/init/subsystem.o 00:01:18.742 CC lib/init/subsystem_rpc.o 00:01:18.742 CC lib/virtio/virtio_vfio_user.o 00:01:18.742 CC lib/accel/accel_rpc.o 00:01:18.742 CC lib/blob/zeroes.o 00:01:18.742 CC lib/init/rpc.o 00:01:18.742 CC lib/vfu_tgt/tgt_rpc.o 00:01:18.742 CC lib/virtio/virtio_pci.o 00:01:18.742 CC lib/accel/accel_sw.o 00:01:18.742 CC lib/blob/blob_bs_dev.o 00:01:18.999 LIB libspdk_init.a 00:01:19.000 SO libspdk_init.so.5.0 00:01:19.000 LIB libspdk_virtio.a 00:01:19.000 LIB libspdk_vfu_tgt.a 00:01:19.000 SYMLINK libspdk_init.so 00:01:19.000 SO libspdk_vfu_tgt.so.3.0 00:01:19.000 SO libspdk_virtio.so.7.0 00:01:19.258 SYMLINK libspdk_vfu_tgt.so 00:01:19.258 SYMLINK libspdk_virtio.so 00:01:19.258 CC lib/event/app.o 00:01:19.258 CC lib/event/reactor.o 00:01:19.258 CC lib/event/log_rpc.o 00:01:19.258 CC lib/event/app_rpc.o 00:01:19.258 CC lib/event/scheduler_static.o 00:01:19.825 LIB libspdk_event.a 00:01:19.825 SO libspdk_event.so.13.0 00:01:19.825 SYMLINK libspdk_event.so 00:01:19.825 LIB libspdk_accel.a 00:01:19.825 SO libspdk_accel.so.15.0 00:01:19.825 SYMLINK libspdk_accel.so 00:01:19.825 LIB libspdk_nvme.a 00:01:20.083 SO libspdk_nvme.so.13.0 00:01:20.083 CC lib/bdev/bdev.o 00:01:20.083 CC lib/bdev/bdev_rpc.o 00:01:20.083 CC lib/bdev/bdev_zone.o 00:01:20.083 CC lib/bdev/part.o 00:01:20.083 CC lib/bdev/scsi_nvme.o 00:01:20.342 SYMLINK libspdk_nvme.so 00:01:21.716 LIB libspdk_blob.a 00:01:21.716 SO libspdk_blob.so.11.0 00:01:21.716 SYMLINK libspdk_blob.so 00:01:21.716 CC lib/lvol/lvol.o 00:01:21.716 CC lib/blobfs/blobfs.o 00:01:21.716 CC lib/blobfs/tree.o 00:01:22.649 LIB libspdk_bdev.a 00:01:22.650 SO libspdk_bdev.so.15.0 00:01:22.650 LIB libspdk_blobfs.a 00:01:22.650 SO libspdk_blobfs.so.10.0 00:01:22.650 SYMLINK libspdk_bdev.so 00:01:22.650 SYMLINK libspdk_blobfs.so 00:01:22.650 LIB libspdk_lvol.a 00:01:22.650 SO libspdk_lvol.so.10.0 00:01:22.650 SYMLINK libspdk_lvol.so 00:01:22.913 CC lib/nbd/nbd.o 00:01:22.913 CC lib/nvmf/ctrlr.o 00:01:22.913 CC lib/ublk/ublk.o 00:01:22.913 CC lib/scsi/dev.o 00:01:22.913 CC lib/nbd/nbd_rpc.o 00:01:22.913 CC 
lib/ftl/ftl_core.o 00:01:22.913 CC lib/ublk/ublk_rpc.o 00:01:22.913 CC lib/nvmf/ctrlr_discovery.o 00:01:22.913 CC lib/scsi/lun.o 00:01:22.913 CC lib/ftl/ftl_init.o 00:01:22.913 CC lib/scsi/port.o 00:01:22.913 CC lib/nvmf/ctrlr_bdev.o 00:01:22.913 CC lib/ftl/ftl_layout.o 00:01:22.913 CC lib/nvmf/subsystem.o 00:01:22.913 CC lib/scsi/scsi.o 00:01:22.913 CC lib/scsi/scsi_bdev.o 00:01:22.913 CC lib/ftl/ftl_debug.o 00:01:22.913 CC lib/nvmf/nvmf.o 00:01:22.913 CC lib/ftl/ftl_io.o 00:01:22.914 CC lib/nvmf/nvmf_rpc.o 00:01:22.914 CC lib/scsi/scsi_pr.o 00:01:22.914 CC lib/ftl/ftl_sb.o 00:01:22.914 CC lib/scsi/scsi_rpc.o 00:01:22.914 CC lib/nvmf/transport.o 00:01:22.914 CC lib/nvmf/tcp.o 00:01:22.914 CC lib/scsi/task.o 00:01:22.914 CC lib/ftl/ftl_l2p.o 00:01:22.914 CC lib/nvmf/vfio_user.o 00:01:22.914 CC lib/ftl/ftl_l2p_flat.o 00:01:22.914 CC lib/ftl/ftl_nv_cache.o 00:01:22.914 CC lib/ftl/ftl_band.o 00:01:22.914 CC lib/nvmf/rdma.o 00:01:22.914 CC lib/ftl/ftl_band_ops.o 00:01:22.914 CC lib/ftl/ftl_writer.o 00:01:22.914 CC lib/ftl/ftl_rq.o 00:01:22.914 CC lib/ftl/ftl_reloc.o 00:01:22.914 CC lib/ftl/ftl_l2p_cache.o 00:01:22.914 CC lib/ftl/ftl_p2l.o 00:01:22.914 CC lib/ftl/mngt/ftl_mngt.o 00:01:22.914 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:22.914 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:22.914 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:22.914 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:22.914 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:22.914 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:22.914 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:22.914 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:22.914 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:23.177 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:23.177 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:23.177 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:23.177 CC lib/ftl/utils/ftl_conf.o 00:01:23.177 CC lib/ftl/utils/ftl_md.o 00:01:23.177 CC lib/ftl/utils/ftl_mempool.o 00:01:23.177 CC lib/ftl/utils/ftl_bitmap.o 00:01:23.177 CC lib/ftl/utils/ftl_property.o 00:01:23.177 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:23.177 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:23.177 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:23.177 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:23.177 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:23.177 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:23.177 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:23.436 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:23.436 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:23.436 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:23.436 CC lib/ftl/base/ftl_base_dev.o 00:01:23.436 CC lib/ftl/base/ftl_base_bdev.o 00:01:23.436 CC lib/ftl/ftl_trace.o 00:01:23.696 LIB libspdk_nbd.a 00:01:23.696 SO libspdk_nbd.so.7.0 00:01:23.696 SYMLINK libspdk_nbd.so 00:01:23.696 LIB libspdk_scsi.a 00:01:23.696 SO libspdk_scsi.so.9.0 00:01:23.955 SYMLINK libspdk_scsi.so 00:01:23.955 LIB libspdk_ublk.a 00:01:23.955 SO libspdk_ublk.so.3.0 00:01:23.955 SYMLINK libspdk_ublk.so 00:01:23.955 CC lib/vhost/vhost.o 00:01:23.955 CC lib/iscsi/conn.o 00:01:23.955 CC lib/vhost/vhost_rpc.o 00:01:23.955 CC lib/iscsi/init_grp.o 00:01:23.955 CC lib/vhost/vhost_scsi.o 00:01:23.955 CC lib/iscsi/iscsi.o 00:01:23.955 CC lib/vhost/vhost_blk.o 00:01:23.955 CC lib/iscsi/md5.o 00:01:23.955 CC lib/vhost/rte_vhost_user.o 00:01:23.955 CC lib/iscsi/param.o 00:01:23.955 CC lib/iscsi/portal_grp.o 00:01:23.955 CC lib/iscsi/tgt_node.o 00:01:23.955 CC lib/iscsi/iscsi_subsystem.o 00:01:23.955 CC lib/iscsi/iscsi_rpc.o 00:01:23.955 CC lib/iscsi/task.o 00:01:24.213 LIB libspdk_ftl.a 00:01:24.213 SO libspdk_ftl.so.9.0 00:01:24.778 SYMLINK libspdk_ftl.so 00:01:25.345 
LIB libspdk_vhost.a 00:01:25.345 SO libspdk_vhost.so.8.0 00:01:25.345 LIB libspdk_nvmf.a 00:01:25.345 SYMLINK libspdk_vhost.so 00:01:25.345 SO libspdk_nvmf.so.18.0 00:01:25.345 LIB libspdk_iscsi.a 00:01:25.603 SO libspdk_iscsi.so.8.0 00:01:25.603 SYMLINK libspdk_nvmf.so 00:01:25.603 SYMLINK libspdk_iscsi.so 00:01:25.861 CC module/env_dpdk/env_dpdk_rpc.o 00:01:25.861 CC module/vfu_device/vfu_virtio.o 00:01:25.861 CC module/vfu_device/vfu_virtio_blk.o 00:01:25.861 CC module/vfu_device/vfu_virtio_scsi.o 00:01:25.861 CC module/vfu_device/vfu_virtio_rpc.o 00:01:26.121 CC module/accel/error/accel_error.o 00:01:26.121 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:26.121 CC module/accel/error/accel_error_rpc.o 00:01:26.121 CC module/accel/ioat/accel_ioat.o 00:01:26.121 CC module/accel/dsa/accel_dsa.o 00:01:26.121 CC module/accel/ioat/accel_ioat_rpc.o 00:01:26.121 CC module/sock/posix/posix.o 00:01:26.121 CC module/scheduler/gscheduler/gscheduler.o 00:01:26.121 CC module/accel/iaa/accel_iaa.o 00:01:26.121 CC module/accel/dsa/accel_dsa_rpc.o 00:01:26.121 CC module/accel/iaa/accel_iaa_rpc.o 00:01:26.121 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:26.121 CC module/blob/bdev/blob_bdev.o 00:01:26.121 CC module/keyring/file/keyring.o 00:01:26.121 CC module/keyring/file/keyring_rpc.o 00:01:26.121 LIB libspdk_env_dpdk_rpc.a 00:01:26.121 SO libspdk_env_dpdk_rpc.so.6.0 00:01:26.121 SYMLINK libspdk_env_dpdk_rpc.so 00:01:26.121 LIB libspdk_keyring_file.a 00:01:26.121 LIB libspdk_scheduler_gscheduler.a 00:01:26.121 LIB libspdk_scheduler_dpdk_governor.a 00:01:26.121 SO libspdk_scheduler_gscheduler.so.4.0 00:01:26.121 SO libspdk_keyring_file.so.1.0 00:01:26.121 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:26.121 LIB libspdk_accel_error.a 00:01:26.121 LIB libspdk_accel_ioat.a 00:01:26.121 LIB libspdk_scheduler_dynamic.a 00:01:26.121 LIB libspdk_accel_iaa.a 00:01:26.121 SO libspdk_accel_error.so.2.0 00:01:26.121 SO libspdk_scheduler_dynamic.so.4.0 00:01:26.121 SO libspdk_accel_ioat.so.6.0 00:01:26.379 SYMLINK libspdk_scheduler_gscheduler.so 00:01:26.379 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:26.379 SYMLINK libspdk_keyring_file.so 00:01:26.379 SO libspdk_accel_iaa.so.3.0 00:01:26.379 LIB libspdk_accel_dsa.a 00:01:26.379 SYMLINK libspdk_scheduler_dynamic.so 00:01:26.379 LIB libspdk_blob_bdev.a 00:01:26.379 SYMLINK libspdk_accel_error.so 00:01:26.379 SYMLINK libspdk_accel_ioat.so 00:01:26.379 SO libspdk_accel_dsa.so.5.0 00:01:26.379 SO libspdk_blob_bdev.so.11.0 00:01:26.379 SYMLINK libspdk_accel_iaa.so 00:01:26.379 SYMLINK libspdk_accel_dsa.so 00:01:26.379 SYMLINK libspdk_blob_bdev.so 00:01:26.639 LIB libspdk_vfu_device.a 00:01:26.639 CC module/bdev/delay/vbdev_delay.o 00:01:26.639 CC module/bdev/malloc/bdev_malloc.o 00:01:26.639 CC module/bdev/passthru/vbdev_passthru.o 00:01:26.639 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:26.640 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:26.640 CC module/bdev/ftl/bdev_ftl.o 00:01:26.640 CC module/bdev/lvol/vbdev_lvol.o 00:01:26.640 CC module/bdev/null/bdev_null.o 00:01:26.640 CC module/bdev/nvme/bdev_nvme.o 00:01:26.640 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:26.640 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:26.640 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:26.640 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:26.640 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:26.640 CC module/bdev/null/bdev_null_rpc.o 00:01:26.640 CC module/blobfs/bdev/blobfs_bdev.o 00:01:26.640 CC module/bdev/gpt/gpt.o 00:01:26.640 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:01:26.640 CC module/bdev/iscsi/bdev_iscsi.o 00:01:26.640 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:26.640 CC module/bdev/error/vbdev_error.o 00:01:26.640 CC module/bdev/gpt/vbdev_gpt.o 00:01:26.640 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:26.640 CC module/bdev/raid/bdev_raid.o 00:01:26.640 CC module/bdev/nvme/nvme_rpc.o 00:01:26.640 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:26.640 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:26.640 CC module/bdev/aio/bdev_aio.o 00:01:26.640 CC module/bdev/split/vbdev_split.o 00:01:26.640 CC module/bdev/raid/bdev_raid_rpc.o 00:01:26.640 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:26.640 CC module/bdev/aio/bdev_aio_rpc.o 00:01:26.640 CC module/bdev/raid/bdev_raid_sb.o 00:01:26.640 CC module/bdev/nvme/bdev_mdns_client.o 00:01:26.640 CC module/bdev/split/vbdev_split_rpc.o 00:01:26.640 CC module/bdev/error/vbdev_error_rpc.o 00:01:26.640 CC module/bdev/nvme/vbdev_opal.o 00:01:26.640 CC module/bdev/raid/raid0.o 00:01:26.640 CC module/bdev/raid/raid1.o 00:01:26.640 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:26.640 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:26.640 CC module/bdev/raid/concat.o 00:01:26.640 SO libspdk_vfu_device.so.3.0 00:01:26.898 SYMLINK libspdk_vfu_device.so 00:01:26.898 LIB libspdk_sock_posix.a 00:01:26.899 SO libspdk_sock_posix.so.6.0 00:01:26.899 SYMLINK libspdk_sock_posix.so 00:01:26.899 LIB libspdk_blobfs_bdev.a 00:01:27.157 SO libspdk_blobfs_bdev.so.6.0 00:01:27.157 LIB libspdk_bdev_split.a 00:01:27.157 LIB libspdk_bdev_gpt.a 00:01:27.157 SO libspdk_bdev_split.so.6.0 00:01:27.157 SO libspdk_bdev_gpt.so.6.0 00:01:27.157 SYMLINK libspdk_blobfs_bdev.so 00:01:27.157 LIB libspdk_bdev_ftl.a 00:01:27.157 LIB libspdk_bdev_null.a 00:01:27.157 LIB libspdk_bdev_error.a 00:01:27.157 SO libspdk_bdev_null.so.6.0 00:01:27.157 SO libspdk_bdev_ftl.so.6.0 00:01:27.157 SYMLINK libspdk_bdev_split.so 00:01:27.157 LIB libspdk_bdev_passthru.a 00:01:27.157 SYMLINK libspdk_bdev_gpt.so 00:01:27.157 SO libspdk_bdev_error.so.6.0 00:01:27.157 SO libspdk_bdev_passthru.so.6.0 00:01:27.157 LIB libspdk_bdev_aio.a 00:01:27.157 LIB libspdk_bdev_lvol.a 00:01:27.157 LIB libspdk_bdev_malloc.a 00:01:27.157 SYMLINK libspdk_bdev_null.so 00:01:27.157 SYMLINK libspdk_bdev_ftl.so 00:01:27.157 SO libspdk_bdev_malloc.so.6.0 00:01:27.157 SO libspdk_bdev_aio.so.6.0 00:01:27.157 SO libspdk_bdev_lvol.so.6.0 00:01:27.157 SYMLINK libspdk_bdev_error.so 00:01:27.157 LIB libspdk_bdev_zone_block.a 00:01:27.157 SYMLINK libspdk_bdev_passthru.so 00:01:27.157 LIB libspdk_bdev_iscsi.a 00:01:27.157 SO libspdk_bdev_zone_block.so.6.0 00:01:27.157 LIB libspdk_bdev_delay.a 00:01:27.157 SYMLINK libspdk_bdev_malloc.so 00:01:27.157 SYMLINK libspdk_bdev_aio.so 00:01:27.157 SYMLINK libspdk_bdev_lvol.so 00:01:27.157 SO libspdk_bdev_iscsi.so.6.0 00:01:27.416 SO libspdk_bdev_delay.so.6.0 00:01:27.416 SYMLINK libspdk_bdev_zone_block.so 00:01:27.416 SYMLINK libspdk_bdev_iscsi.so 00:01:27.416 SYMLINK libspdk_bdev_delay.so 00:01:27.416 LIB libspdk_bdev_virtio.a 00:01:27.416 SO libspdk_bdev_virtio.so.6.0 00:01:27.416 SYMLINK libspdk_bdev_virtio.so 00:01:27.674 LIB libspdk_bdev_raid.a 00:01:27.674 SO libspdk_bdev_raid.so.6.0 00:01:27.932 SYMLINK libspdk_bdev_raid.so 00:01:29.307 LIB libspdk_bdev_nvme.a 00:01:29.307 SO libspdk_bdev_nvme.so.7.0 00:01:29.307 SYMLINK libspdk_bdev_nvme.so 00:01:29.565 CC module/event/subsystems/iobuf/iobuf.o 00:01:29.565 CC module/event/subsystems/vmd/vmd.o 00:01:29.565 CC module/event/subsystems/scheduler/scheduler.o 
00:01:29.565 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:29.565 CC module/event/subsystems/sock/sock.o 00:01:29.565 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:29.565 CC module/event/subsystems/keyring/keyring.o 00:01:29.565 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:29.565 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:29.565 LIB libspdk_event_sock.a 00:01:29.565 LIB libspdk_event_keyring.a 00:01:29.565 LIB libspdk_event_vfu_tgt.a 00:01:29.565 LIB libspdk_event_vhost_blk.a 00:01:29.565 LIB libspdk_event_vmd.a 00:01:29.565 LIB libspdk_event_scheduler.a 00:01:29.565 SO libspdk_event_sock.so.5.0 00:01:29.565 SO libspdk_event_keyring.so.1.0 00:01:29.565 LIB libspdk_event_iobuf.a 00:01:29.565 SO libspdk_event_vhost_blk.so.3.0 00:01:29.565 SO libspdk_event_vfu_tgt.so.3.0 00:01:29.565 SO libspdk_event_scheduler.so.4.0 00:01:29.565 SO libspdk_event_vmd.so.6.0 00:01:29.825 SO libspdk_event_iobuf.so.3.0 00:01:29.825 SYMLINK libspdk_event_sock.so 00:01:29.825 SYMLINK libspdk_event_keyring.so 00:01:29.825 SYMLINK libspdk_event_vhost_blk.so 00:01:29.825 SYMLINK libspdk_event_vfu_tgt.so 00:01:29.825 SYMLINK libspdk_event_scheduler.so 00:01:29.825 SYMLINK libspdk_event_vmd.so 00:01:29.825 SYMLINK libspdk_event_iobuf.so 00:01:29.825 CC module/event/subsystems/accel/accel.o 00:01:30.083 LIB libspdk_event_accel.a 00:01:30.084 SO libspdk_event_accel.so.6.0 00:01:30.084 SYMLINK libspdk_event_accel.so 00:01:30.341 CC module/event/subsystems/bdev/bdev.o 00:01:30.600 LIB libspdk_event_bdev.a 00:01:30.600 SO libspdk_event_bdev.so.6.0 00:01:30.600 SYMLINK libspdk_event_bdev.so 00:01:30.858 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:30.858 CC module/event/subsystems/scsi/scsi.o 00:01:30.858 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:30.858 CC module/event/subsystems/nbd/nbd.o 00:01:30.858 CC module/event/subsystems/ublk/ublk.o 00:01:30.858 LIB libspdk_event_ublk.a 00:01:30.858 LIB libspdk_event_nbd.a 00:01:30.858 LIB libspdk_event_scsi.a 00:01:30.858 SO libspdk_event_nbd.so.6.0 00:01:30.858 SO libspdk_event_ublk.so.3.0 00:01:30.858 SO libspdk_event_scsi.so.6.0 00:01:30.858 SYMLINK libspdk_event_nbd.so 00:01:30.858 SYMLINK libspdk_event_ublk.so 00:01:30.858 SYMLINK libspdk_event_scsi.so 00:01:31.115 LIB libspdk_event_nvmf.a 00:01:31.115 SO libspdk_event_nvmf.so.6.0 00:01:31.115 SYMLINK libspdk_event_nvmf.so 00:01:31.115 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:31.115 CC module/event/subsystems/iscsi/iscsi.o 00:01:31.374 LIB libspdk_event_vhost_scsi.a 00:01:31.374 SO libspdk_event_vhost_scsi.so.3.0 00:01:31.374 LIB libspdk_event_iscsi.a 00:01:31.374 SYMLINK libspdk_event_vhost_scsi.so 00:01:31.374 SO libspdk_event_iscsi.so.6.0 00:01:31.374 SYMLINK libspdk_event_iscsi.so 00:01:31.642 SO libspdk.so.6.0 00:01:31.642 SYMLINK libspdk.so 00:01:31.642 CXX app/trace/trace.o 00:01:31.642 CC app/spdk_nvme_perf/perf.o 00:01:31.642 CC app/trace_record/trace_record.o 00:01:31.642 TEST_HEADER include/spdk/accel.h 00:01:31.642 CC app/spdk_nvme_identify/identify.o 00:01:31.642 CC app/spdk_lspci/spdk_lspci.o 00:01:31.642 CC app/spdk_top/spdk_top.o 00:01:31.642 TEST_HEADER include/spdk/accel_module.h 00:01:31.642 CC test/rpc_client/rpc_client_test.o 00:01:31.642 CC app/spdk_nvme_discover/discovery_aer.o 00:01:31.642 TEST_HEADER include/spdk/assert.h 00:01:31.642 TEST_HEADER include/spdk/barrier.h 00:01:31.642 TEST_HEADER include/spdk/base64.h 00:01:31.642 TEST_HEADER include/spdk/bdev.h 00:01:31.642 TEST_HEADER include/spdk/bdev_module.h 00:01:31.917 TEST_HEADER 
include/spdk/bdev_zone.h 00:01:31.917 TEST_HEADER include/spdk/bit_array.h 00:01:31.917 TEST_HEADER include/spdk/bit_pool.h 00:01:31.917 TEST_HEADER include/spdk/blob_bdev.h 00:01:31.917 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:31.917 TEST_HEADER include/spdk/blobfs.h 00:01:31.917 TEST_HEADER include/spdk/blob.h 00:01:31.917 TEST_HEADER include/spdk/conf.h 00:01:31.917 TEST_HEADER include/spdk/config.h 00:01:31.917 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:31.917 TEST_HEADER include/spdk/cpuset.h 00:01:31.917 TEST_HEADER include/spdk/crc16.h 00:01:31.917 TEST_HEADER include/spdk/crc32.h 00:01:31.917 CC app/spdk_dd/spdk_dd.o 00:01:31.917 TEST_HEADER include/spdk/crc64.h 00:01:31.917 TEST_HEADER include/spdk/dif.h 00:01:31.917 CC app/nvmf_tgt/nvmf_main.o 00:01:31.917 TEST_HEADER include/spdk/dma.h 00:01:31.917 CC app/iscsi_tgt/iscsi_tgt.o 00:01:31.917 TEST_HEADER include/spdk/endian.h 00:01:31.917 TEST_HEADER include/spdk/env_dpdk.h 00:01:31.917 CC app/vhost/vhost.o 00:01:31.917 TEST_HEADER include/spdk/env.h 00:01:31.917 TEST_HEADER include/spdk/event.h 00:01:31.917 TEST_HEADER include/spdk/fd_group.h 00:01:31.917 TEST_HEADER include/spdk/fd.h 00:01:31.917 TEST_HEADER include/spdk/file.h 00:01:31.917 TEST_HEADER include/spdk/ftl.h 00:01:31.917 TEST_HEADER include/spdk/gpt_spec.h 00:01:31.917 TEST_HEADER include/spdk/hexlify.h 00:01:31.917 TEST_HEADER include/spdk/histogram_data.h 00:01:31.917 TEST_HEADER include/spdk/idxd.h 00:01:31.917 TEST_HEADER include/spdk/idxd_spec.h 00:01:31.917 CC app/spdk_tgt/spdk_tgt.o 00:01:31.917 TEST_HEADER include/spdk/init.h 00:01:31.917 CC test/event/reactor/reactor.o 00:01:31.917 CC test/event/event_perf/event_perf.o 00:01:31.917 CC test/app/histogram_perf/histogram_perf.o 00:01:31.917 CC examples/ioat/verify/verify.o 00:01:31.917 TEST_HEADER include/spdk/ioat.h 00:01:31.917 CC test/app/jsoncat/jsoncat.o 00:01:31.917 CC app/fio/nvme/fio_plugin.o 00:01:31.917 TEST_HEADER include/spdk/ioat_spec.h 00:01:31.917 CC examples/util/zipf/zipf.o 00:01:31.917 TEST_HEADER include/spdk/iscsi_spec.h 00:01:31.917 CC examples/ioat/perf/perf.o 00:01:31.917 TEST_HEADER include/spdk/json.h 00:01:31.917 CC test/event/reactor_perf/reactor_perf.o 00:01:31.917 TEST_HEADER include/spdk/jsonrpc.h 00:01:31.917 CC test/nvme/aer/aer.o 00:01:31.917 CC examples/accel/perf/accel_perf.o 00:01:31.917 TEST_HEADER include/spdk/keyring.h 00:01:31.917 CC test/app/stub/stub.o 00:01:31.917 CC test/thread/poller_perf/poller_perf.o 00:01:31.917 TEST_HEADER include/spdk/keyring_module.h 00:01:31.917 CC examples/vmd/lsvmd/lsvmd.o 00:01:31.917 CC examples/idxd/perf/perf.o 00:01:31.917 TEST_HEADER include/spdk/likely.h 00:01:31.917 CC examples/nvme/hello_world/hello_world.o 00:01:31.917 CC examples/sock/hello_world/hello_sock.o 00:01:31.917 TEST_HEADER include/spdk/log.h 00:01:31.917 TEST_HEADER include/spdk/lvol.h 00:01:31.917 TEST_HEADER include/spdk/memory.h 00:01:31.917 TEST_HEADER include/spdk/mmio.h 00:01:31.917 TEST_HEADER include/spdk/nbd.h 00:01:31.917 CC test/event/app_repeat/app_repeat.o 00:01:31.917 TEST_HEADER include/spdk/notify.h 00:01:31.917 TEST_HEADER include/spdk/nvme.h 00:01:31.917 TEST_HEADER include/spdk/nvme_intel.h 00:01:31.917 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:31.917 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:31.917 TEST_HEADER include/spdk/nvme_spec.h 00:01:31.917 CC examples/bdev/hello_world/hello_bdev.o 00:01:31.917 TEST_HEADER include/spdk/nvme_zns.h 00:01:31.917 CC examples/bdev/bdevperf/bdevperf.o 00:01:31.917 CC test/blobfs/mkfs/mkfs.o 
00:01:31.917 CC test/app/bdev_svc/bdev_svc.o 00:01:31.917 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:31.917 CC app/fio/bdev/fio_plugin.o 00:01:31.917 CC test/bdev/bdevio/bdevio.o 00:01:31.917 CC test/dma/test_dma/test_dma.o 00:01:31.917 CC test/accel/dif/dif.o 00:01:31.917 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:31.917 CC examples/nvmf/nvmf/nvmf.o 00:01:31.917 TEST_HEADER include/spdk/nvmf.h 00:01:31.917 CC test/event/scheduler/scheduler.o 00:01:31.917 CC examples/blob/hello_world/hello_blob.o 00:01:31.917 TEST_HEADER include/spdk/nvmf_spec.h 00:01:31.917 CC examples/thread/thread/thread_ex.o 00:01:31.917 TEST_HEADER include/spdk/nvmf_transport.h 00:01:31.917 TEST_HEADER include/spdk/opal.h 00:01:31.917 TEST_HEADER include/spdk/opal_spec.h 00:01:31.917 TEST_HEADER include/spdk/pci_ids.h 00:01:31.917 TEST_HEADER include/spdk/pipe.h 00:01:31.917 TEST_HEADER include/spdk/queue.h 00:01:31.917 TEST_HEADER include/spdk/reduce.h 00:01:31.917 TEST_HEADER include/spdk/rpc.h 00:01:31.917 TEST_HEADER include/spdk/scheduler.h 00:01:31.917 TEST_HEADER include/spdk/scsi.h 00:01:31.917 TEST_HEADER include/spdk/scsi_spec.h 00:01:31.917 TEST_HEADER include/spdk/sock.h 00:01:32.185 TEST_HEADER include/spdk/stdinc.h 00:01:32.185 TEST_HEADER include/spdk/string.h 00:01:32.185 TEST_HEADER include/spdk/thread.h 00:01:32.185 TEST_HEADER include/spdk/trace.h 00:01:32.185 LINK spdk_lspci 00:01:32.185 TEST_HEADER include/spdk/trace_parser.h 00:01:32.185 TEST_HEADER include/spdk/tree.h 00:01:32.185 CC test/env/mem_callbacks/mem_callbacks.o 00:01:32.185 TEST_HEADER include/spdk/ublk.h 00:01:32.185 TEST_HEADER include/spdk/util.h 00:01:32.185 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:32.185 TEST_HEADER include/spdk/uuid.h 00:01:32.185 TEST_HEADER include/spdk/version.h 00:01:32.185 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:32.185 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:32.185 TEST_HEADER include/spdk/vhost.h 00:01:32.185 TEST_HEADER include/spdk/vmd.h 00:01:32.185 CC test/lvol/esnap/esnap.o 00:01:32.185 TEST_HEADER include/spdk/xor.h 00:01:32.185 TEST_HEADER include/spdk/zipf.h 00:01:32.185 CXX test/cpp_headers/accel.o 00:01:32.185 LINK rpc_client_test 00:01:32.185 LINK spdk_nvme_discover 00:01:32.185 LINK jsoncat 00:01:32.185 LINK reactor 00:01:32.185 LINK histogram_perf 00:01:32.185 LINK lsvmd 00:01:32.185 LINK reactor_perf 00:01:32.185 LINK interrupt_tgt 00:01:32.185 LINK event_perf 00:01:32.185 LINK poller_perf 00:01:32.185 LINK zipf 00:01:32.185 LINK nvmf_tgt 00:01:32.185 LINK spdk_trace_record 00:01:32.185 LINK vhost 00:01:32.185 LINK app_repeat 00:01:32.185 LINK iscsi_tgt 00:01:32.185 LINK stub 00:01:32.448 LINK spdk_tgt 00:01:32.448 LINK ioat_perf 00:01:32.448 LINK verify 00:01:32.448 LINK hello_world 00:01:32.448 LINK bdev_svc 00:01:32.448 LINK mkfs 00:01:32.448 LINK hello_sock 00:01:32.448 LINK hello_bdev 00:01:32.448 CXX test/cpp_headers/accel_module.o 00:01:32.448 LINK scheduler 00:01:32.448 LINK hello_blob 00:01:32.448 LINK aer 00:01:32.448 CXX test/cpp_headers/assert.o 00:01:32.448 LINK thread 00:01:32.448 CXX test/cpp_headers/barrier.o 00:01:32.448 LINK spdk_dd 00:01:32.448 CC test/env/vtophys/vtophys.o 00:01:32.448 CXX test/cpp_headers/base64.o 00:01:32.713 LINK idxd_perf 00:01:32.713 LINK nvmf 00:01:32.713 LINK spdk_trace 00:01:32.713 CXX test/cpp_headers/bdev.o 00:01:32.713 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:32.713 CC examples/vmd/led/led.o 00:01:32.713 CC test/env/memory/memory_ut.o 00:01:32.713 CC test/nvme/reset/reset.o 00:01:32.713 CXX 
test/cpp_headers/bdev_module.o 00:01:32.713 CC examples/nvme/reconnect/reconnect.o 00:01:32.713 CC test/env/pci/pci_ut.o 00:01:32.713 LINK dif 00:01:32.713 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:32.713 LINK bdevio 00:01:32.713 CC examples/blob/cli/blobcli.o 00:01:32.713 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:32.713 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:32.713 LINK test_dma 00:01:32.713 CC examples/nvme/arbitration/arbitration.o 00:01:32.713 CC test/nvme/sgl/sgl.o 00:01:32.713 LINK accel_perf 00:01:32.713 CC test/nvme/e2edp/nvme_dp.o 00:01:32.975 CC examples/nvme/hotplug/hotplug.o 00:01:32.975 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:32.975 LINK vtophys 00:01:32.975 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:32.975 CXX test/cpp_headers/bdev_zone.o 00:01:32.975 CC test/nvme/overhead/overhead.o 00:01:32.975 CC test/nvme/err_injection/err_injection.o 00:01:32.975 LINK nvme_fuzz 00:01:32.975 LINK spdk_bdev 00:01:32.975 CXX test/cpp_headers/bit_array.o 00:01:32.975 LINK spdk_nvme 00:01:32.975 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:32.975 CC examples/nvme/abort/abort.o 00:01:32.975 CXX test/cpp_headers/bit_pool.o 00:01:32.975 LINK led 00:01:32.975 CC test/nvme/startup/startup.o 00:01:32.975 CC test/nvme/reserve/reserve.o 00:01:32.975 LINK env_dpdk_post_init 00:01:32.975 CXX test/cpp_headers/blob_bdev.o 00:01:32.975 CC test/nvme/simple_copy/simple_copy.o 00:01:32.975 CC test/nvme/connect_stress/connect_stress.o 00:01:32.975 CXX test/cpp_headers/blobfs_bdev.o 00:01:32.975 CC test/nvme/boot_partition/boot_partition.o 00:01:32.975 CC test/nvme/compliance/nvme_compliance.o 00:01:32.975 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:32.975 CC test/nvme/fused_ordering/fused_ordering.o 00:01:33.238 CXX test/cpp_headers/blobfs.o 00:01:33.238 CXX test/cpp_headers/blob.o 00:01:33.238 CXX test/cpp_headers/conf.o 00:01:33.238 CXX test/cpp_headers/config.o 00:01:33.238 CXX test/cpp_headers/cpuset.o 00:01:33.238 CXX test/cpp_headers/crc16.o 00:01:33.238 CXX test/cpp_headers/crc32.o 00:01:33.238 LINK reset 00:01:33.238 CXX test/cpp_headers/crc64.o 00:01:33.238 CC test/nvme/fdp/fdp.o 00:01:33.238 LINK cmb_copy 00:01:33.238 CXX test/cpp_headers/dif.o 00:01:33.238 CXX test/cpp_headers/dma.o 00:01:33.238 CXX test/cpp_headers/endian.o 00:01:33.238 CXX test/cpp_headers/env_dpdk.o 00:01:33.238 LINK err_injection 00:01:33.238 CC test/nvme/cuse/cuse.o 00:01:33.238 CXX test/cpp_headers/env.o 00:01:33.238 LINK startup 00:01:33.238 CXX test/cpp_headers/event.o 00:01:33.238 LINK mem_callbacks 00:01:33.238 LINK sgl 00:01:33.238 LINK spdk_nvme_identify 00:01:33.238 LINK pmr_persistence 00:01:33.238 LINK hotplug 00:01:33.238 LINK nvme_dp 00:01:33.238 LINK spdk_top 00:01:33.504 LINK spdk_nvme_perf 00:01:33.504 LINK reconnect 00:01:33.504 LINK reserve 00:01:33.504 LINK pci_ut 00:01:33.504 LINK boot_partition 00:01:33.504 LINK bdevperf 00:01:33.504 LINK arbitration 00:01:33.504 CXX test/cpp_headers/fd_group.o 00:01:33.504 LINK connect_stress 00:01:33.504 LINK simple_copy 00:01:33.504 CXX test/cpp_headers/fd.o 00:01:33.504 LINK overhead 00:01:33.504 LINK doorbell_aers 00:01:33.504 CXX test/cpp_headers/file.o 00:01:33.504 LINK fused_ordering 00:01:33.504 CXX test/cpp_headers/ftl.o 00:01:33.504 CXX test/cpp_headers/gpt_spec.o 00:01:33.504 CXX test/cpp_headers/hexlify.o 00:01:33.504 CXX test/cpp_headers/histogram_data.o 00:01:33.504 CXX test/cpp_headers/idxd.o 00:01:33.504 CXX test/cpp_headers/idxd_spec.o 00:01:33.504 CXX test/cpp_headers/init.o 00:01:33.504 CXX 
test/cpp_headers/ioat.o 00:01:33.504 CXX test/cpp_headers/ioat_spec.o 00:01:33.504 CXX test/cpp_headers/iscsi_spec.o 00:01:33.504 LINK nvme_manage 00:01:33.773 CXX test/cpp_headers/json.o 00:01:33.773 CXX test/cpp_headers/jsonrpc.o 00:01:33.773 CXX test/cpp_headers/keyring.o 00:01:33.773 CXX test/cpp_headers/keyring_module.o 00:01:33.773 LINK vhost_fuzz 00:01:33.773 CXX test/cpp_headers/likely.o 00:01:33.773 CXX test/cpp_headers/log.o 00:01:33.773 CXX test/cpp_headers/lvol.o 00:01:33.773 CXX test/cpp_headers/memory.o 00:01:33.773 CXX test/cpp_headers/mmio.o 00:01:33.773 CXX test/cpp_headers/nbd.o 00:01:33.773 CXX test/cpp_headers/notify.o 00:01:33.773 LINK abort 00:01:33.773 CXX test/cpp_headers/nvme.o 00:01:33.773 LINK blobcli 00:01:33.773 LINK nvme_compliance 00:01:33.773 CXX test/cpp_headers/nvme_intel.o 00:01:33.773 CXX test/cpp_headers/nvme_ocssd.o 00:01:33.773 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:33.773 CXX test/cpp_headers/nvme_spec.o 00:01:33.773 CXX test/cpp_headers/nvme_zns.o 00:01:33.773 CXX test/cpp_headers/nvmf_cmd.o 00:01:33.773 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:33.773 CXX test/cpp_headers/nvmf.o 00:01:33.773 CXX test/cpp_headers/nvmf_spec.o 00:01:33.774 CXX test/cpp_headers/nvmf_transport.o 00:01:33.774 CXX test/cpp_headers/opal.o 00:01:33.774 LINK fdp 00:01:33.774 CXX test/cpp_headers/opal_spec.o 00:01:33.774 CXX test/cpp_headers/pci_ids.o 00:01:33.774 CXX test/cpp_headers/pipe.o 00:01:33.774 CXX test/cpp_headers/queue.o 00:01:33.774 CXX test/cpp_headers/reduce.o 00:01:33.774 CXX test/cpp_headers/rpc.o 00:01:33.774 CXX test/cpp_headers/scheduler.o 00:01:34.033 CXX test/cpp_headers/scsi.o 00:01:34.033 CXX test/cpp_headers/scsi_spec.o 00:01:34.033 CXX test/cpp_headers/sock.o 00:01:34.033 CXX test/cpp_headers/stdinc.o 00:01:34.033 CXX test/cpp_headers/string.o 00:01:34.033 CXX test/cpp_headers/thread.o 00:01:34.033 CXX test/cpp_headers/trace.o 00:01:34.033 CXX test/cpp_headers/trace_parser.o 00:01:34.033 CXX test/cpp_headers/tree.o 00:01:34.033 CXX test/cpp_headers/ublk.o 00:01:34.033 CXX test/cpp_headers/util.o 00:01:34.033 CXX test/cpp_headers/uuid.o 00:01:34.033 CXX test/cpp_headers/version.o 00:01:34.033 CXX test/cpp_headers/vfio_user_pci.o 00:01:34.033 CXX test/cpp_headers/vfio_user_spec.o 00:01:34.033 CXX test/cpp_headers/vhost.o 00:01:34.033 CXX test/cpp_headers/vmd.o 00:01:34.033 CXX test/cpp_headers/xor.o 00:01:34.033 CXX test/cpp_headers/zipf.o 00:01:34.291 LINK memory_ut 00:01:34.858 LINK cuse 00:01:35.118 LINK iscsi_fuzz 00:01:38.411 LINK esnap 00:01:38.411 00:01:38.411 real 0m48.099s 00:01:38.411 user 10m0.571s 00:01:38.411 sys 2m26.479s 00:01:38.411 21:16:04 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:38.411 21:16:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.411 ************************************ 00:01:38.411 END TEST make 00:01:38.411 ************************************ 00:01:38.411 21:16:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:38.411 21:16:04 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:38.411 21:16:04 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:38.411 21:16:04 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.411 21:16:04 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:38.411 21:16:04 -- pm/common@45 -- $ pid=2395201 00:01:38.411 21:16:04 -- pm/common@52 -- $ sudo kill -TERM 2395201 00:01:38.411 21:16:04 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:01:38.411 21:16:04 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:38.411 21:16:04 -- pm/common@45 -- $ pid=2395203 00:01:38.411 21:16:04 -- pm/common@52 -- $ sudo kill -TERM 2395203 00:01:38.411 21:16:04 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.411 21:16:04 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:38.411 21:16:04 -- pm/common@45 -- $ pid=2395202 00:01:38.412 21:16:04 -- pm/common@52 -- $ sudo kill -TERM 2395202 00:01:38.670 21:16:04 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.670 21:16:04 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:38.670 21:16:04 -- pm/common@45 -- $ pid=2395204 00:01:38.670 21:16:04 -- pm/common@52 -- $ sudo kill -TERM 2395204 00:01:38.670 21:16:04 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:38.670 21:16:04 -- nvmf/common.sh@7 -- # uname -s 00:01:38.670 21:16:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:38.670 21:16:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:38.670 21:16:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:38.670 21:16:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:38.670 21:16:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:38.670 21:16:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:38.670 21:16:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:38.670 21:16:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:38.670 21:16:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:38.670 21:16:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:38.670 21:16:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:38.670 21:16:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:38.670 21:16:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:38.670 21:16:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:38.670 21:16:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:38.670 21:16:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:38.670 21:16:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:38.670 21:16:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:38.670 21:16:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:38.670 21:16:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:38.670 21:16:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.670 21:16:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.670 21:16:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.670 21:16:04 -- paths/export.sh@5 -- # export PATH 00:01:38.670 21:16:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.670 21:16:04 -- nvmf/common.sh@47 -- # : 0 00:01:38.670 21:16:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:38.670 21:16:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:38.670 21:16:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:38.670 21:16:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:38.670 21:16:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:38.670 21:16:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:38.670 21:16:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:38.670 21:16:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:38.670 21:16:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:38.670 21:16:04 -- spdk/autotest.sh@32 -- # uname -s 00:01:38.670 21:16:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:38.670 21:16:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:38.670 21:16:04 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:38.670 21:16:04 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:38.670 21:16:04 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:38.670 21:16:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:38.670 21:16:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:38.670 21:16:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:38.670 21:16:04 -- spdk/autotest.sh@48 -- # udevadm_pid=2450559 00:01:38.670 21:16:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:38.670 21:16:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:38.670 21:16:04 -- pm/common@17 -- # local monitor 00:01:38.670 21:16:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.670 21:16:04 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2450561 00:01:38.670 21:16:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.670 21:16:04 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2450563 00:01:38.670 21:16:04 -- pm/common@21 -- # date +%s 00:01:38.671 21:16:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.671 21:16:04 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2450566 00:01:38.671 21:16:04 -- pm/common@21 -- # date +%s 00:01:38.671 21:16:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.671 21:16:04 -- pm/common@21 -- # date +%s 00:01:38.671 21:16:04 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2450571 00:01:38.671 21:16:04 -- pm/common@26 -- # sleep 1 00:01:38.671 21:16:04 -- pm/common@21 -- # date +%s 00:01:38.671 21:16:04 -- pm/common@21 -- # sudo -E 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713986164 00:01:38.671 21:16:04 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713986164 00:01:38.671 21:16:04 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713986164 00:01:38.671 21:16:04 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713986164 00:01:38.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713986164_collect-bmc-pm.bmc.pm.log 00:01:38.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713986164_collect-vmstat.pm.log 00:01:38.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713986164_collect-cpu-load.pm.log 00:01:38.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713986164_collect-cpu-temp.pm.log 00:01:39.606 21:16:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:39.606 21:16:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:39.606 21:16:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:39.606 21:16:05 -- common/autotest_common.sh@10 -- # set +x 00:01:39.606 21:16:05 -- spdk/autotest.sh@59 -- # create_test_list 00:01:39.606 21:16:05 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:39.606 21:16:05 -- common/autotest_common.sh@10 -- # set +x 00:01:39.606 21:16:05 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:39.606 21:16:05 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.606 21:16:05 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.606 21:16:05 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:39.606 21:16:05 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.606 21:16:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:39.606 21:16:05 -- common/autotest_common.sh@1441 -- # uname 00:01:39.606 21:16:05 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:39.606 21:16:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:39.606 21:16:05 -- common/autotest_common.sh@1461 -- # uname 00:01:39.606 21:16:05 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:39.606 21:16:05 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:39.606 21:16:05 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:39.606 21:16:05 -- spdk/autotest.sh@72 -- # hash lcov 00:01:39.606 21:16:05 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:39.606 21:16:05 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:39.606 --rc lcov_branch_coverage=1 00:01:39.606 --rc lcov_function_coverage=1 00:01:39.606 --rc genhtml_branch_coverage=1 00:01:39.606 --rc 
genhtml_function_coverage=1 00:01:39.606 --rc genhtml_legend=1 00:01:39.606 --rc geninfo_all_blocks=1 00:01:39.606 ' 00:01:39.606 21:16:05 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:39.606 --rc lcov_branch_coverage=1 00:01:39.606 --rc lcov_function_coverage=1 00:01:39.606 --rc genhtml_branch_coverage=1 00:01:39.606 --rc genhtml_function_coverage=1 00:01:39.606 --rc genhtml_legend=1 00:01:39.606 --rc geninfo_all_blocks=1 00:01:39.606 ' 00:01:39.606 21:16:05 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:39.606 --rc lcov_branch_coverage=1 00:01:39.606 --rc lcov_function_coverage=1 00:01:39.606 --rc genhtml_branch_coverage=1 00:01:39.606 --rc genhtml_function_coverage=1 00:01:39.606 --rc genhtml_legend=1 00:01:39.606 --rc geninfo_all_blocks=1 00:01:39.606 --no-external' 00:01:39.606 21:16:05 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:39.606 --rc lcov_branch_coverage=1 00:01:39.606 --rc lcov_function_coverage=1 00:01:39.606 --rc genhtml_branch_coverage=1 00:01:39.606 --rc genhtml_function_coverage=1 00:01:39.606 --rc genhtml_legend=1 00:01:39.606 --rc geninfo_all_blocks=1 00:01:39.606 --no-external' 00:01:39.606 21:16:05 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:39.864 lcov: LCOV version 1.14 00:01:39.864 21:16:05 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:01:49.840 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:01:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:01:49.841 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 
00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no 
functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:01:49.841 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:01:49.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 
00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:01:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:01:49.842 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:01:53.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:01:53.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:05.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:05.318 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:05.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:05.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:05.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:05.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:13.426 21:16:37 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:13.426 21:16:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:13.426 21:16:37 -- common/autotest_common.sh@10 -- # set +x 00:02:13.426 21:16:37 -- spdk/autotest.sh@91 -- # rm -f 00:02:13.426 21:16:37 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:13.426 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:13.426 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:13.426 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:13.426 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:13.426 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:13.426 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:13.426 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:13.426 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:13.426 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:13.426 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:13.426 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:13.426 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:13.426 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:13.426 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:13.426 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:13.426 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:13.426 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:13.426 21:16:39 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:13.426 21:16:39 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:13.426 21:16:39 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:13.426 21:16:39 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:13.426 21:16:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:13.426 21:16:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:13.426 21:16:39 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:13.426 21:16:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:13.426 21:16:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:13.426 21:16:39 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:13.426 21:16:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:13.426 21:16:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:13.426 21:16:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:13.426 21:16:39 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:13.426 21:16:39 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:13.426 No valid GPT data, bailing 
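The reset pass above screens each NVMe namespace with is_block_zoned before reclaiming it. A minimal standalone sketch of what that probe boils down to, assuming only the standard sysfs attribute (the real helper lives in common/autotest_common.sh):

  # /sys/block/<dev>/queue/zoned reads "none" for conventional drives and
  # "host-aware" or "host-managed" for zoned block devices.
  is_block_zoned() {
      local device=$1
      # Treat a missing attribute (older kernels) as not zoned.
      [[ -e /sys/block/$device/queue/zoned ]] || return 1
      [[ $(< "/sys/block/$device/queue/zoned") != none ]]
  }

  for nvme in /sys/block/nvme*; do
      [[ -e $nvme ]] || continue          # no NVMe block devices present
      dev=$(basename "$nvme")
      is_block_zoned "$dev" && echo "skipping zoned device $dev"
  done

Zoned namespaces are collected into zoned_devs so the later stages can avoid issuing conventional I/O to them; on this runner the check comes back "none" for nvme0n1, so nothing is skipped.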
00:02:13.426 21:16:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:13.426 21:16:39 -- scripts/common.sh@391 -- # pt= 00:02:13.426 21:16:39 -- scripts/common.sh@392 -- # return 1 00:02:13.426 21:16:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:13.426 1+0 records in 00:02:13.426 1+0 records out 00:02:13.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00212843 s, 493 MB/s 00:02:13.426 21:16:39 -- spdk/autotest.sh@118 -- # sync 00:02:13.426 21:16:39 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:13.426 21:16:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:13.426 21:16:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:15.327 21:16:40 -- spdk/autotest.sh@124 -- # uname -s 00:02:15.327 21:16:40 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:15.327 21:16:40 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:15.327 21:16:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:15.327 21:16:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:15.327 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:02:15.585 ************************************ 00:02:15.585 START TEST setup.sh 00:02:15.585 ************************************ 00:02:15.585 21:16:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:15.585 * Looking for test storage... 00:02:15.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:15.585 21:16:41 -- setup/test-setup.sh@10 -- # uname -s 00:02:15.585 21:16:41 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:15.585 21:16:41 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:15.586 21:16:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:15.586 21:16:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:15.586 21:16:41 -- common/autotest_common.sh@10 -- # set +x 00:02:15.586 ************************************ 00:02:15.586 START TEST acl 00:02:15.586 ************************************ 00:02:15.586 21:16:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:15.843 * Looking for test storage... 
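Worth noting how the wipe just logged is gated: the device is only zeroed because both partition-table probes came back empty (spdk-gpt.py bailed with no valid GPT data, then blkid printed no PTTYPE, so block_in_use returned 1). A condensed sketch of that guard under the assumption that blkid's PTTYPE probe is the deciding check, as in scripts/common.sh (the function name here is illustrative; autotest inlines this logic):

  wipe_if_no_pt() {
      local dev=$1 pt
      # blkid prints the partition-table type (gpt, dos, ...) or nothing.
      pt=$(blkid -s PTTYPE -o value "$dev")
      if [[ -z $pt ]]; then
          # No recognizable partition table: zero the first MiB (needs root)
          # so stale metadata cannot confuse the tests that claim the device.
          dd if=/dev/zero of="$dev" bs=1M count=1
      fi
  }

The 493 MB/s figure dd reports is just the throughput of that single one-megabyte copy, not a device benchmark.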
00:02:15.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:15.843 21:16:41 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:15.843 21:16:41 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:15.843 21:16:41 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:15.843 21:16:41 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:15.843 21:16:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:15.843 21:16:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:15.843 21:16:41 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:15.844 21:16:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:15.844 21:16:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:15.844 21:16:41 -- setup/acl.sh@12 -- # devs=() 00:02:15.844 21:16:41 -- setup/acl.sh@12 -- # declare -a devs 00:02:15.844 21:16:41 -- setup/acl.sh@13 -- # drivers=() 00:02:15.844 21:16:41 -- setup/acl.sh@13 -- # declare -A drivers 00:02:15.844 21:16:41 -- setup/acl.sh@51 -- # setup reset 00:02:15.844 21:16:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:15.844 21:16:41 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:17.218 21:16:42 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:17.218 21:16:42 -- setup/acl.sh@16 -- # local dev driver 00:02:17.218 21:16:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:17.218 21:16:42 -- setup/acl.sh@15 -- # setup output status 00:02:17.218 21:16:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:17.218 21:16:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:18.595 Hugepages 00:02:18.595 node hugesize free / total 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 00:02:18.595 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.595 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.595 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:18.595 21:16:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:18.596 21:16:43 -- setup/acl.sh@20 -- # continue 00:02:18.596 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:02:18.596 21:16:43 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:18.596 21:16:43 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:18.596 21:16:43 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:18.596 21:16:43 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:18.596 21:16:43 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:18.596 21:16:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:18.596 21:16:43 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:18.596 21:16:43 -- setup/acl.sh@54 -- # run_test denied denied 00:02:18.596 21:16:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:18.596 21:16:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:18.596 21:16:43 -- common/autotest_common.sh@10 -- # set +x 00:02:18.596 ************************************ 00:02:18.596 START TEST denied 00:02:18.596 ************************************ 00:02:18.596 21:16:44 -- common/autotest_common.sh@1111 -- # denied 00:02:18.596 21:16:44 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:18.596 21:16:44 -- setup/acl.sh@38 -- # setup output config 00:02:18.596 21:16:44 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:18.596 21:16:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:18.596 21:16:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:20.001 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:20.001 21:16:45 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:20.001 21:16:45 -- setup/acl.sh@28 -- # local dev driver 00:02:20.001 21:16:45 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:20.001 21:16:45 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:20.001 21:16:45 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:20.001 21:16:45 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:20.001 21:16:45 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:20.001 21:16:45 -- setup/acl.sh@41 -- # setup reset 00:02:20.001 21:16:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:20.001 21:16:45 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:22.535 00:02:22.535 real 0m3.849s 00:02:22.535 user 0m1.088s 00:02:22.535 sys 0m1.848s 00:02:22.535 21:16:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:22.535 21:16:47 -- common/autotest_common.sh@10 -- # set +x 00:02:22.535 ************************************ 00:02:22.535 END TEST denied 00:02:22.535 ************************************ 00:02:22.535 21:16:47 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:22.535 21:16:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:22.535 21:16:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:22.535 21:16:47 -- common/autotest_common.sh@10 -- # set +x 00:02:22.535 ************************************ 00:02:22.535 START TEST allowed 00:02:22.535 ************************************ 00:02:22.535 21:16:48 -- common/autotest_common.sh@1111 -- # allowed 00:02:22.535 21:16:48 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:22.535 21:16:48 -- setup/acl.sh@45 -- # setup output config 00:02:22.535 21:16:48 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:22.535 21:16:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:22.535 21:16:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:25.064 
0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:25.064 21:16:50 -- setup/acl.sh@47 -- # verify 00:02:25.064 21:16:50 -- setup/acl.sh@28 -- # local dev driver 00:02:25.064 21:16:50 -- setup/acl.sh@48 -- # setup reset 00:02:25.064 21:16:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:25.064 21:16:50 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:26.438 00:02:26.438 real 0m3.869s 00:02:26.438 user 0m1.012s 00:02:26.438 sys 0m1.704s 00:02:26.438 21:16:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:26.438 21:16:51 -- common/autotest_common.sh@10 -- # set +x 00:02:26.438 ************************************ 00:02:26.438 END TEST allowed 00:02:26.438 ************************************ 00:02:26.438 00:02:26.438 real 0m10.680s 00:02:26.438 user 0m3.249s 00:02:26.438 sys 0m5.420s 00:02:26.438 21:16:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:26.438 21:16:51 -- common/autotest_common.sh@10 -- # set +x 00:02:26.438 ************************************ 00:02:26.438 END TEST acl 00:02:26.438 ************************************ 00:02:26.438 21:16:51 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:26.438 21:16:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:26.438 21:16:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:26.438 21:16:51 -- common/autotest_common.sh@10 -- # set +x 00:02:26.438 ************************************ 00:02:26.438 START TEST hugepages 00:02:26.438 ************************************ 00:02:26.438 21:16:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:26.438 * Looking for test storage... 
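The hugepages test starting here drives everything through get_meminfo. For the system-wide case that helper reduces to pulling one field out of /proc/meminfo; per-node values come from /sys/devices/system/node/node<N>/meminfo instead, as the node= branch in the trace below shows. A minimal sketch with the same observable behavior, assuming kB-denominated fields:

  get_meminfo() {
      local key=$1
      # Print the numeric value of one /proc/meminfo field, e.g.
      # "Hugepagesize:    2048 kB" -> 2048.
      awk -v k="$key" -F': *' '$1 == k { print $2 + 0 }' /proc/meminfo
  }

  get_meminfo Hugepagesize   # 2048 on this runner, matching the log below

The trace that follows is the mapfile-based equivalent walking every meminfo key until it hits Hugepagesize, which is why each non-matching field shows up as a [[ ... ]] / continue pair.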
00:02:26.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:26.438 21:16:52 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:26.438 21:16:52 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:26.438 21:16:52 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:26.438 21:16:52 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:26.438 21:16:52 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:26.438 21:16:52 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:26.438 21:16:52 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:26.438 21:16:52 -- setup/common.sh@18 -- # local node= 00:02:26.438 21:16:52 -- setup/common.sh@19 -- # local var val 00:02:26.438 21:16:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:26.439 21:16:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:26.439 21:16:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:26.439 21:16:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:26.439 21:16:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:26.439 21:16:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 34660352 kB' 'MemAvailable: 39819576 kB' 'Buffers: 2696 kB' 'Cached: 18857192 kB' 'SwapCached: 0 kB' 'Active: 14759092 kB' 'Inactive: 4646328 kB' 'Active(anon): 14145100 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548864 kB' 'Mapped: 187752 kB' 'Shmem: 13599568 kB' 'KReclaimable: 543340 kB' 'Slab: 936220 kB' 'SReclaimable: 543340 kB' 'SUnreclaim: 392880 kB' 'KernelStack: 12944 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 15324396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:26.439 21:16:52 -- setup/common.sh@32 -- # continue 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:26.439 21:16:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:26.439 21:16:52 
[xtrace condensed: setup/common.sh@31-32 walks each remaining /proc/meminfo key (Zswap … HugePages_Surp), hitting `continue` on every non-matching key, until the Hugepagesize line matches]
00:02:26.440 21:16:52 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:26.440 21:16:52 -- setup/common.sh@33 -- # echo 2048
00:02:26.440 21:16:52 -- setup/common.sh@33 -- # return 0
00:02:26.440 21:16:52 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:26.440 21:16:52 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:26.440 21:16:52 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:26.440 21:16:52 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:26.440 21:16:52 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:26.440 21:16:52 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:26.440 21:16:52 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
[xtrace condensed: setup/hugepages.sh@207 get_nodes finds two NUMA nodes (nodes_sys[0]=2048, nodes_sys[1]=0, no_nodes=2); setup/hugepages.sh@208 clear_hp then echoes 0 into every hugepages-*/nr_hugepages under both nodes and exports CLEAR_HUGE=yes]
00:02:26.699 21:16:52 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:26.699 21:16:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:26.699 21:16:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:26.699 21:16:52 -- common/autotest_common.sh@10 -- # set +x
00:02:26.699 ************************************
00:02:26.699 START TEST default_setup
00:02:26.699 ************************************
00:02:26.699 21:16:52 -- common/autotest_common.sh@1111 -- # default_setup
00:02:26.699 21:16:52 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:26.699 21:16:52 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:26.699 21:16:52 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:26.699 21:16:52 -- setup/hugepages.sh@51 -- # shift
00:02:26.699 21:16:52 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:26.699 21:16:52 -- setup/hugepages.sh@52 -- # local node_ids
00:02:26.699 21:16:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:26.699 21:16:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:26.699 21:16:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:26.699 21:16:52 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:26.699 21:16:52 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:26.699 21:16:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:26.699 21:16:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:26.699 21:16:52 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:26.699 21:16:52 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:26.699 21:16:52 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
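[editor's note: to make the sizing arithmetic above easier to follow, get_test_nr_hugepages turns the requested 2097152 kB into nr_hugepages=1024 by dividing by the 2048 kB default hugepage size read from /proc/meminfo, and get_test_nr_hugepages_per_node then pins the whole count on the single requested node, node 0. A minimal standalone sketch of the same computation; variable names and output are illustrative, not SPDK's implementation:]

#!/usr/bin/env bash
# Illustrative sketch of the sizing step traced above (not SPDK's code).
size_kb=2097152                                                # requested pool size, in kB
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this machine
nr_hugepages=$(( size_kb / hugepage_kb ))                      # 2097152 / 2048 = 1024 pages
user_nodes=(0)                                                 # the test pins everything on node 0
declare -A nodes_test
for node in "${user_nodes[@]}"; do
    # With one user node the whole count lands on it; the real script's
    # multi-node distribution logic is more involved.
    nodes_test[$node]=$nr_hugepages
done
echo "node0 gets ${nodes_test[0]} hugepages"                   # -> node0 gets 1024 hugepages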
00:02:26.699 21:16:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:26.700 21:16:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:26.700 21:16:52 -- setup/hugepages.sh@73 -- # return 0
00:02:26.700 21:16:52 -- setup/hugepages.sh@137 -- # setup output
00:02:26.700 21:16:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:26.700 21:16:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:28.079 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:28.079 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:28.079 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:28.079 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:28.079 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:28.079 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:28.079 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:28.079 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:28.079 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:28.080 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:28.080 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:28.080 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:28.080 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:28.080 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:28.080 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:28.080 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:29.018 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:02:29.018 21:16:54 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:29.018 21:16:54 -- setup/hugepages.sh@89 -- # local node
00:02:29.018 21:16:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:29.018 21:16:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:29.018 21:16:54 -- setup/hugepages.sh@92 -- # local surp
00:02:29.018 21:16:54 -- setup/hugepages.sh@93 -- # local resv
00:02:29.018 21:16:54 -- setup/hugepages.sh@94 -- # local anon
00:02:29.018 21:16:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:29.018 21:16:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:29.018 21:16:54 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:29.018 21:16:54 -- setup/common.sh@18 -- # local node=
00:02:29.018 21:16:54 -- setup/common.sh@19 -- # local var val
00:02:29.018 21:16:54 -- setup/common.sh@20 -- # local mem_f mem
00:02:29.018 21:16:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:29.018 21:16:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:29.018 21:16:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:29.018 21:16:54 -- setup/common.sh@28 -- # mapfile -t mem
00:02:29.018 21:16:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:29.018 21:16:54 -- setup/common.sh@31 -- # IFS=': '
00:02:29.018 21:16:54 -- setup/common.sh@31 -- # read -r var val _
00:02:29.018 21:16:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36774256 kB' 'MemAvailable: 41933480 kB' 'Buffers: 2696 kB' 'Cached: 18857288 kB' 'SwapCached: 0 kB' 'Active: 14784468 kB' 'Inactive: 4646328 kB' 'Active(anon): 14170476 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573508 kB' 'Mapped: 188392 kB' 'Shmem: 13599664 kB' 'KReclaimable: 543340 kB' 'Slab: 935776 kB' 'SReclaimable: 543340 kB' 'SUnreclaim: 392436 kB' 'KernelStack: 12800 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15350692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196732 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
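[editor's note: the `ioatdma -> vfio-pci` and `nvme -> vfio-pci` lines above are scripts/setup.sh detaching each device from its kernel driver so SPDK can drive it from userspace. A rough sketch of the generic sysfs rebind mechanism involved, for one device; the BDF below is a placeholder, and setup.sh itself layers device enumeration, hugepage setup, and permission handling on top of this:]

#!/usr/bin/env bash
# Sketch: rebind one PCI function to vfio-pci via sysfs (needs root, and the
# vfio-pci module must already be loaded). BDF is a placeholder; the trace
# rebinds 0000:00:04.*, 0000:80:04.* (ioatdma) and 0000:88:00.0 (nvme).
bdf=0000:88:00.0
dev=/sys/bus/pci/devices/$bdf
# Detach whatever kernel driver currently owns the device, if any.
[ -e "$dev/driver" ] && echo "$bdf" > "$dev/driver/unbind"
# Restrict driver matching to vfio-pci, then ask the PCI core to re-probe.
echo vfio-pci > "$dev/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe
# To restore normal driver matching later, clear the override:
#   echo > "$dev/driver_override"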
[xtrace condensed: setup/common.sh@31-32 scans the snapshot keys (MemTotal … HardwareCorrupted), hitting `continue` on each, until AnonHugePages matches]
00:02:29.019 21:16:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:29.019 21:16:54 -- setup/common.sh@33 -- # echo 0
00:02:29.019 21:16:54 -- setup/common.sh@33 -- # return 0
00:02:29.019 21:16:54 -- setup/hugepages.sh@97 -- # anon=0
00:02:29.019 21:16:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:29.019 21:16:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:29.019 21:16:54 -- setup/common.sh@18 -- # local node=
00:02:29.019 21:16:54 -- setup/common.sh@19 -- # local var val
00:02:29.019 21:16:54 -- setup/common.sh@20 -- # local mem_f mem
00:02:29.019 21:16:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:29.019 21:16:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:29.019 21:16:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:29.019 21:16:54 -- setup/common.sh@28 -- # mapfile -t mem
00:02:29.019 21:16:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:29.019 21:16:54 -- setup/common.sh@31 -- # IFS=': '
00:02:29.019 21:16:54 -- setup/common.sh@31 -- # read -r var val _
00:02:29.019 21:16:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36783704 kB' 'MemAvailable: 41942928 kB' 'Buffers: 2696 kB' 'Cached: 18857292 kB' 'SwapCached: 0 kB' 'Active: 14784616 kB' 'Inactive: 4646328 kB' 'Active(anon): 14170624 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574116 kB' 'Mapped: 188652 kB' 'Shmem: 13599668 kB' 'KReclaimable: 543340 kB' 'Slab: 935784 kB' 'SReclaimable: 543340 kB' 'SUnreclaim: 392444 kB' 'KernelStack: 12800 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15350340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196684 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
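[editor's note: the meminfo reads condensed above all follow the same get_meminfo pattern: snapshot /proc/meminfo (or, when a node is given, /sys/devices/system/node/nodeN/meminfo), split each line on ': ', and return the value once the requested key matches. A simplified sketch of that lookup; the per-node branch and the exact SPDK implementation are omitted:]

#!/usr/bin/env bash
# Simplified sketch of the get_meminfo pattern traced above (no node= handling).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do   # split "Key:   value kB" on ':' and spaces
        [[ $var == "$get" ]] || continue   # skip keys until the requested one matches
        echo "$val"                        # e.g. AnonHugePages -> 0, Hugepagesize -> 2048
        return 0
    done < /proc/meminfo
    return 1                               # key not present
}
get_meminfo HugePages_Total                # -> 1024 in the snapshots above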
[xtrace condensed: setup/common.sh@31-32 scans the snapshot keys (MemTotal … HugePages_Rsvd), hitting `continue` on each, until HugePages_Surp matches]
00:02:29.021 21:16:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:29.021 21:16:54 -- setup/common.sh@33 -- # echo 0
00:02:29.021 21:16:54 -- setup/common.sh@33 -- # return 0
00:02:29.021 21:16:54 -- setup/hugepages.sh@99 -- # surp=0
00:02:29.021 21:16:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:29.021 21:16:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:29.021 21:16:54 -- setup/common.sh@18 -- # local node=
00:02:29.021 21:16:54 -- setup/common.sh@19 -- # local var val
00:02:29.021 21:16:54 -- setup/common.sh@20 -- # local mem_f mem
00:02:29.021 21:16:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:29.021 21:16:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:29.021 21:16:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:29.021 21:16:54 -- setup/common.sh@28 -- # mapfile -t mem
00:02:29.021 21:16:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:29.021 21:16:54 -- setup/common.sh@31 -- # IFS=': '
00:02:29.021 21:16:54 -- setup/common.sh@31 -- # read -r var val _
00:02:29.021 21:16:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36783372 kB' 'MemAvailable: 41942596 kB' 'Buffers: 2696 kB' 'Cached: 18857300 kB' 'SwapCached: 0 kB' 'Active: 14778368 kB' 'Inactive: 4646328 kB' 'Active(anon): 14164376 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567908 kB' 'Mapped: 187832 kB' 'Shmem: 13599676 kB' 'KReclaimable: 543340 kB' 'Slab: 935784 kB' 'SReclaimable: 543340 kB' 'SUnreclaim: 392444 kB' 'KernelStack: 12784 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15344236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196632 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
[xtrace condensed: setup/common.sh@31-32 scans the snapshot keys (MemTotal … HugePages_Free), hitting `continue` on each, until HugePages_Rsvd matches]
00:02:29.022 21:16:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:29.022 21:16:54 -- setup/common.sh@33 -- # echo 0
00:02:29.022 21:16:54 -- setup/common.sh@33 -- # return 0
00:02:29.022 21:16:54 -- setup/hugepages.sh@100 -- # resv=0
00:02:29.022 21:16:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:29.022 nr_hugepages=1024
00:02:29.022 21:16:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:29.022 resv_hugepages=0
00:02:29.022 21:16:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:29.022 surplus_hugepages=0
00:02:29.022 21:16:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:29.022 anon_hugepages=0
00:02:29.022 21:16:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:29.022 21:16:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
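[editor's note: the two arithmetic checks just traced are the heart of the verification: the requested page count must match what the kernel actually reports once surplus and reserved pages are accounted for. A standalone sketch of the same consistency check; the helper name is illustrative:]

#!/usr/bin/env bash
# Sketch of the consistency check traced above: all 1024 requested pages must
# be visible in /proc/meminfo, with no surplus or reserved pages skewing it.
requested=1024
read_key() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }
total=$(read_key HugePages_Total)
surp=$(read_key HugePages_Surp)
resv=$(read_key HugePages_Rsvd)
(( requested == total + surp + resv )) || { echo "surplus/reserved mismatch" >&2; exit 1; }
(( requested == total ))               || { echo "nr_hugepages mismatch" >&2; exit 1; }
echo "verified: $total hugepages allocated"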
get=HugePages_Total 00:02:29.022 21:16:54 -- setup/common.sh@18 -- # local node= 00:02:29.022 21:16:54 -- setup/common.sh@19 -- # local var val 00:02:29.022 21:16:54 -- setup/common.sh@20 -- # local mem_f mem 00:02:29.022 21:16:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:29.022 21:16:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:29.022 21:16:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:29.022 21:16:54 -- setup/common.sh@28 -- # mapfile -t mem 00:02:29.022 21:16:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.022 21:16:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36783504 kB' 'MemAvailable: 41942728 kB' 'Buffers: 2696 kB' 'Cached: 18857324 kB' 'SwapCached: 0 kB' 'Active: 14777944 kB' 'Inactive: 4646328 kB' 'Active(anon): 14163952 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567592 kB' 'Mapped: 187892 kB' 'Shmem: 13599700 kB' 'KReclaimable: 543340 kB' 'Slab: 935904 kB' 'SReclaimable: 543340 kB' 'SUnreclaim: 392564 kB' 'KernelStack: 12832 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15344748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:29.022 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.022 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.022 21:16:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # 
continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.023 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.023 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 
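[editor's note] The xtrace above records one tight loop in setup/common.sh: get_meminfo snapshots the meminfo file with mapfile, then splits each captured line on IFS=': ' and compares the key against the requested one, continuing until it matches and the value can be echoed — HugePages_Rsvd resolved to 0 a moment ago, and the pass for HugePages_Total is still running here. Reduced to its essentials, the helper behaves roughly like the sketch below; this is a minimal reimplementation for illustration, not the SPDK script itself, and the name get_meminfo_sketch is ours.

shopt -s extglob   # needed for the +([0-9]) pattern, as in setup/common.sh

# Print the value of one meminfo key, optionally from a per-node meminfo file.
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo_sketch HugePages_Total     # -> 1024 on this box
get_meminfo_sketch HugePages_Surp 0    # node 0 -> 0, per the scan further down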
00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:29.024 21:16:54 -- setup/common.sh@33 -- # echo 1024 00:02:29.024 21:16:54 -- setup/common.sh@33 -- # return 0 00:02:29.024 21:16:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:29.024 21:16:54 -- setup/hugepages.sh@112 -- # get_nodes 00:02:29.024 21:16:54 -- setup/hugepages.sh@27 -- # local node 00:02:29.024 21:16:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:29.024 21:16:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:29.024 21:16:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:29.024 21:16:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:29.024 21:16:54 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:29.024 21:16:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:29.024 21:16:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:29.024 21:16:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:29.024 21:16:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:29.024 21:16:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:29.024 21:16:54 -- setup/common.sh@18 -- # local node=0 00:02:29.024 21:16:54 -- setup/common.sh@19 -- # local var val 00:02:29.024 21:16:54 -- setup/common.sh@20 -- # local mem_f mem 00:02:29.024 21:16:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:29.024 21:16:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:29.024 21:16:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:29.024 21:16:54 -- setup/common.sh@28 -- # mapfile -t mem 00:02:29.024 21:16:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22024792 kB' 'MemUsed: 10805092 kB' 'SwapCached: 0 
kB' 'Active: 7231636 kB' 'Inactive: 267840 kB' 'Active(anon): 6831360 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 267840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7301368 kB' 'Mapped: 65796 kB' 'AnonPages: 201360 kB' 'Shmem: 6633252 kB' 'KernelStack: 7064 kB' 'PageTables: 4644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 282776 kB' 'Slab: 510764 kB' 'SReclaimable: 282776 kB' 'SUnreclaim: 227988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 
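[editor's note] This second pass reruns the same scan against node 0: called as get_meminfo HugePages_Surp 0, the [[ -e /sys/devices/system/node/node0/meminfo ]] test at setup/common.sh@23 swapped mem_f to the per-node file, and the extglob expansion at @29 stripped the "Node 0 " prefix that every line of that file carries. For reference, the raw per-node file behind the printf capture above looks like this short excerpt (values taken from that capture):

Node 0 MemTotal: 32829884 kB
Node 0 MemFree: 22024792 kB
Node 0 HugePages_Total: 1024
Node 0 HugePages_Free: 1024
Node 0 HugePages_Surp: 0

A quick manual equivalent of the whole loop is simply:

grep HugePages_Surp /sys/devices/system/node/node0/meminfo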
00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.024 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.024 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # continue 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:29.025 21:16:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:29.025 21:16:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.025 21:16:54 -- setup/common.sh@33 -- # echo 0 00:02:29.025 21:16:54 -- setup/common.sh@33 -- # return 0 00:02:29.025 21:16:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:29.025 21:16:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:29.025 21:16:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:29.025 21:16:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:29.025 21:16:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:29.025 node0=1024 expecting 1024 00:02:29.025 21:16:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:29.025 00:02:29.025 real 0m2.455s 00:02:29.025 user 0m0.674s 00:02:29.025 sys 0m0.962s 00:02:29.025 21:16:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:29.025 21:16:54 -- common/autotest_common.sh@10 -- # set +x 00:02:29.025 ************************************ 00:02:29.025 END TEST default_setup 00:02:29.025 ************************************ 00:02:29.025 21:16:54 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:29.025 21:16:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:29.025 21:16:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:29.025 21:16:54 -- common/autotest_common.sh@10 -- # set +x 00:02:29.283 ************************************ 00:02:29.283 START TEST per_node_1G_alloc 00:02:29.283 ************************************ 00:02:29.283 21:16:54 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:02:29.283 21:16:54 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:29.283 21:16:54 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:29.284 21:16:54 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:29.284 21:16:54 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:29.284 21:16:54 -- setup/hugepages.sh@51 -- # shift 00:02:29.284 21:16:54 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:29.284 21:16:54 -- setup/hugepages.sh@52 -- # local node_ids 00:02:29.284 21:16:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:29.284 21:16:54 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:29.284 21:16:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:29.284 21:16:54 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:29.284 21:16:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:29.284 21:16:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:29.284 21:16:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:29.284 21:16:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:29.284 21:16:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:29.284 21:16:54 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:29.284 21:16:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:29.284 21:16:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:29.284 21:16:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:29.284 21:16:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:29.284 21:16:54 -- setup/hugepages.sh@73 -- # return 0 00:02:29.284 21:16:54 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:29.284 
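[editor's note] With default_setup passed (node0=1024 expecting 1024), per_node_1G_alloc asks get_test_nr_hugepages for 1048576 kB on each of nodes 0 and 1; against the 2048 kB Hugepagesize reported in the snapshots, that works out to 512 pages per node, which is why nr_hugepages=512 is stored for both nodes and the run is about to invoke scripts/setup.sh with NRHUGE=512 and HUGENODE=0,1. The sizing arithmetic, and the standard per-node sysfs knob that such a request ultimately drives, reduce to roughly the following — a sketch of the mechanism only, not of what setup.sh actually does:

size_kb=1048576                       # 1 GiB requested per node
hugepagesize_kb=2048                  # 'Hugepagesize: 2048 kB' in the snapshots above
nr=$(( size_kb / hugepagesize_kb ))   # = 512, matching nr_hugepages=512 in the log
for node in 0 1; do
    # standard kernel interface for a per-node 2 MiB hugepage pool; needs root
    echo "$nr" | sudo tee /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
done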
21:16:54 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:29.284 21:16:54 -- setup/hugepages.sh@146 -- # setup output 00:02:29.284 21:16:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:29.284 21:16:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:30.663 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:30.663 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:30.663 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:30.663 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:30.663 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:30.663 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:30.663 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:30.663 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:30.663 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:30.663 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:30.663 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:30.663 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:30.663 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:30.663 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:30.663 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:30.663 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:30.663 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:30.663 21:16:56 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:30.663 21:16:56 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:30.663 21:16:56 -- setup/hugepages.sh@89 -- # local node 00:02:30.664 21:16:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:30.664 21:16:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:30.664 21:16:56 -- setup/hugepages.sh@92 -- # local surp 00:02:30.664 21:16:56 -- setup/hugepages.sh@93 -- # local resv 00:02:30.664 21:16:56 -- setup/hugepages.sh@94 -- # local anon 00:02:30.664 21:16:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:30.664 21:16:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:30.664 21:16:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:30.664 21:16:56 -- setup/common.sh@18 -- # local node= 00:02:30.664 21:16:56 -- setup/common.sh@19 -- # local var val 00:02:30.664 21:16:56 -- setup/common.sh@20 -- # local mem_f mem 00:02:30.664 21:16:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.664 21:16:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:30.664 21:16:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:30.664 21:16:56 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.664 21:16:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36793076 kB' 'MemAvailable: 41952292 kB' 'Buffers: 2696 kB' 'Cached: 18857380 kB' 'SwapCached: 0 kB' 'Active: 14778408 kB' 'Inactive: 4646328 kB' 'Active(anon): 14164416 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567876 kB' 'Mapped: 187872 
kB' 'Shmem: 13599756 kB' 'KReclaimable: 543332 kB' 'Slab: 935988 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392656 kB' 'KernelStack: 12816 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15344628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196696 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 
-- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.664 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.664 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- 
setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:30.665 21:16:56 -- setup/common.sh@33 -- # echo 0 00:02:30.665 21:16:56 -- setup/common.sh@33 -- # return 0 00:02:30.665 21:16:56 -- setup/hugepages.sh@97 -- # anon=0 00:02:30.665 21:16:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:30.665 21:16:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:30.665 21:16:56 -- setup/common.sh@18 -- # local node= 00:02:30.665 21:16:56 -- setup/common.sh@19 -- # local var val 00:02:30.665 21:16:56 -- setup/common.sh@20 -- # local mem_f mem 00:02:30.665 21:16:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.665 21:16:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:30.665 21:16:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:30.665 21:16:56 -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.665 21:16:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36794384 kB' 'MemAvailable: 41953600 kB' 'Buffers: 2696 kB' 'Cached: 18857380 kB' 'SwapCached: 0 kB' 'Active: 14778808 kB' 'Inactive: 4646328 kB' 'Active(anon): 14164816 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568396 kB' 'Mapped: 187948 kB' 'Shmem: 13599756 kB' 'KReclaimable: 543332 kB' 'Slab: 936020 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392688 kB' 'KernelStack: 12832 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15344640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.665 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.665 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.665 
21:16:56 -- setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' read/compare loop walks the remaining meminfo keys, Unevictable through HugePages_Rsvd; none match HugePages_Surp, so each iteration continues]
00:02:30.666 21:16:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:30.666 21:16:56 -- setup/common.sh@33 -- # echo 0
00:02:30.666 21:16:56 -- setup/common.sh@33 -- # return 0
00:02:30.666 21:16:56 -- setup/hugepages.sh@99 -- # surp=0
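The condensed loop above is all get_meminfo does: split each meminfo line on ': ', skip every key that is not the one requested, and echo the value of the first match (0 for HugePages_Surp here). A minimal stand-alone sketch of that pattern follows; the helper name is illustrative, and it reads /proc/meminfo directly instead of going through the script's mapfile'd array:

    # Sketch, not SPDK's code: print the value of one /proc/meminfo field.
    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long run of "continue" lines in the trace
            echo "$val"                        # e.g. "0" for HugePages_Surp
            return 0
        done < /proc/meminfo
        return 1                               # field not present
    }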
00:02:30.666 21:16:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:30.666 21:16:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:30.666 21:16:56 -- setup/common.sh@18 -- # local node=
00:02:30.666 21:16:56 -- setup/common.sh@19 -- # local var val
00:02:30.666 21:16:56 -- setup/common.sh@20 -- # local mem_f mem
00:02:30.666 21:16:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:30.666 21:16:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:30.666 21:16:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:30.666 21:16:56 -- setup/common.sh@28 -- # mapfile -t mem
00:02:30.666 21:16:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:30.666 21:16:56 -- setup/common.sh@31 -- # IFS=': '
00:02:30.666 21:16:56 -- setup/common.sh@31 -- # read -r var val _
00:02:30.667 21:16:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36794936 kB' 'MemAvailable: 41954152 kB' 'Buffers: 2696 kB' 'Cached: 18857396 kB' 'SwapCached: 0 kB' 'Active: 14778148 kB' 'Inactive: 4646328 kB' 'Active(anon): 14164156 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567660 kB' 'Mapped: 187920 kB' 'Shmem: 13599772 kB' 'KReclaimable: 543332 kB' 'Slab: 935992 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392660 kB' 'KernelStack: 12832 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15344652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
00:02:30.667 21:16:56 -- setup/common.sh@31-32 -- # [trace condensed: compare/continue loop walks MemTotal through HugePages_Free; none match HugePages_Rsvd]
00:02:30.668 21:16:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:30.668 21:16:56 -- setup/common.sh@33 -- # echo 0
00:02:30.668 21:16:56 -- setup/common.sh@33 -- # return 0
00:02:30.668 21:16:56 -- setup/hugepages.sh@100 -- # resv=0
00:02:30.668 21:16:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:30.668 nr_hugepages=1024
00:02:30.668 21:16:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:30.668 resv_hugepages=0
00:02:30.668 21:16:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:30.668 surplus_hugepages=0
00:02:30.668 21:16:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:30.668 anon_hugepages=0
00:02:30.668 21:16:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:30.668 21:16:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
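The @107/@109 arithmetic guards just above are plain accounting checks: the kernel-reported total must equal the requested page count plus any surplus and reserved pages. A hedged sketch of that bookkeeping, reusing the get_field helper sketched earlier, with the values from this run:

    nr_hugepages=1024                     # what the test asked for
    surp=$(get_field HugePages_Surp)      # 0 in this run
    resv=$(get_field HugePages_Rsvd)      # 0 in this run
    total=$(get_field HugePages_Total)    # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2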
00:02:30.668 21:16:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:30.668 21:16:56 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:30.668 21:16:56 -- setup/common.sh@18 -- # local node=
00:02:30.668 21:16:56 -- setup/common.sh@19 -- # local var val
00:02:30.668 21:16:56 -- setup/common.sh@20 -- # local mem_f mem
00:02:30.668 21:16:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:30.668 21:16:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:30.668 21:16:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:30.668 21:16:56 -- setup/common.sh@28 -- # mapfile -t mem
00:02:30.668 21:16:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:30.668 21:16:56 -- setup/common.sh@31 -- # IFS=': '
00:02:30.668 21:16:56 -- setup/common.sh@31 -- # read -r var val _
00:02:30.668 21:16:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36795188 kB' 'MemAvailable: 41954404 kB' 'Buffers: 2696 kB' 'Cached: 18857412 kB' 'SwapCached: 0 kB' 'Active: 14778320 kB' 'Inactive: 4646328 kB' 'Active(anon): 14164328 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567792 kB' 'Mapped: 187844 kB' 'Shmem: 13599788 kB' 'KReclaimable: 543332 kB' 'Slab: 936020 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392688 kB' 'KernelStack: 12816 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15344668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
00:02:30.669 21:16:56 -- setup/common.sh@31-32 -- # [trace condensed: compare/continue loop walks MemTotal through ShmemPmdMapped; none match HugePages_Total]
00:02:30.669 21:16:56 -- setup/common.sh@31-32 -- # [trace condensed: loop walks FileHugePages through Unaccepted; none match HugePages_Total]
00:02:30.669 21:16:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:30.669 21:16:56 -- setup/common.sh@33 -- # echo 1024
00:02:30.669 21:16:56 -- setup/common.sh@33 -- # return 0
00:02:30.669 21:16:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:30.669 21:16:56 -- setup/hugepages.sh@112 -- # get_nodes
00:02:30.669 21:16:56 -- setup/hugepages.sh@27 -- # local node
00:02:30.669 21:16:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:30.669 21:16:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:30.669 21:16:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:30.669 21:16:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:30.669 21:16:56 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:30.670 21:16:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
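The per-node pass that follows repeats the same lookup, but with a node argument: when get_meminfo receives one, it swaps /proc/meminfo for that node's sysfs file and strips the "Node N " prefix those lines carry. A sketch of that switch, mirroring the mem_f assignment and the ${mem[@]#Node +([0-9]) } expansion visible in the trace (the function name here is illustrative):

    shopt -s extglob   # the +([0-9]) pattern below is an extended glob
    # Sketch: emit meminfo lines for one NUMA node, or system-wide if node is empty.
    node_meminfo() {
        local node=$1 mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        printf '%s\n' "${mem[@]}"
    }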
00:02:30.670 21:16:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:30.670 21:16:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:30.670 21:16:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:30.670 21:16:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:30.670 21:16:56 -- setup/common.sh@18 -- # local node=0
00:02:30.670 21:16:56 -- setup/common.sh@19 -- # local var val
00:02:30.670 21:16:56 -- setup/common.sh@20 -- # local mem_f mem
00:02:30.670 21:16:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:30.670 21:16:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:30.670 21:16:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:30.670 21:16:56 -- setup/common.sh@28 -- # mapfile -t mem
00:02:30.670 21:16:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:30.670 21:16:56 -- setup/common.sh@31 -- # IFS=': '
00:02:30.670 21:16:56 -- setup/common.sh@31 -- # read -r var val _
00:02:30.670 21:16:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23075428 kB' 'MemUsed: 9754456 kB' 'SwapCached: 0 kB' 'Active: 7233328 kB' 'Inactive: 267840 kB' 'Active(anon): 6833052 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 267840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7301456 kB' 'Mapped: 65820 kB' 'AnonPages: 202828 kB' 'Shmem: 6633340 kB' 'KernelStack: 7080 kB' 'PageTables: 4700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 282776 kB' 'Slab: 510852 kB' 'SReclaimable: 282776 kB' 'SUnreclaim: 228076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:30.670 21:16:56 -- setup/common.sh@31-32 -- # [trace condensed: compare/continue loop walks node0 keys MemTotal through HugePages_Total; none match HugePages_Surp]
00:02:30.671 21:16:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:30.671 21:16:56 -- setup/common.sh@32 -- # continue
00:02:30.671 21:16:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:30.671 21:16:56 -- setup/common.sh@33 -- # echo 0
00:02:30.671 21:16:56 -- setup/common.sh@33 -- # return 0
00:02:30.671 21:16:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:30.671 21:16:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:30.671 21:16:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:30.671 21:16:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:30.671 21:16:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:30.671 21:16:56 -- setup/common.sh@18 -- # local node=1
00:02:30.671 21:16:56 -- setup/common.sh@19 -- # local var val
00:02:30.671 21:16:56 -- setup/common.sh@20 -- # local mem_f mem
00:02:30.671 21:16:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:30.671 21:16:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:30.671 21:16:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:30.671 21:16:56 -- setup/common.sh@28 -- # mapfile -t mem
00:02:30.671 21:16:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:30.671 21:16:56 -- setup/common.sh@31 -- # IFS=': '
00:02:30.671 21:16:56 -- setup/common.sh@31 -- # read -r var val _
00:02:30.671 21:16:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 13720188 kB' 'MemUsed: 13991636 kB' 'SwapCached: 0 kB' 'Active: 7545012 kB' 'Inactive: 4378488 kB' 'Active(anon): 7331296 kB' 'Inactive(anon): 0 kB' 'Active(file): 213716 kB' 'Inactive(file): 4378488 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11558664 kB' 'Mapped: 122024 kB' 'AnonPages: 364928 kB' 'Shmem: 6966460 kB' 'KernelStack: 5720 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 260556 kB' 'Slab: 425168 kB' 'SReclaimable: 260556 kB' 'SUnreclaim: 164612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:30.671 21:16:56 -- setup/common.sh@31-32 -- # [trace condensed: compare/continue loop walks node1 keys MemTotal through AnonHugePages; none match HugePages_Surp]
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.671 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.671 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.671 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.671 21:16:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.671 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.671 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.671 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.671 21:16:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.671 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.671 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.672 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.672 21:16:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.672 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.672 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.672 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.672 21:16:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.672 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.672 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.672 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.672 21:16:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.672 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.672 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.672 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.672 21:16:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.672 21:16:56 -- setup/common.sh@32 -- # continue 00:02:30.672 21:16:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:30.672 21:16:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:30.672 21:16:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:30.672 21:16:56 -- setup/common.sh@33 -- # echo 0 00:02:30.672 21:16:56 -- setup/common.sh@33 -- # return 0 00:02:30.672 21:16:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:30.672 21:16:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:30.672 21:16:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:30.672 21:16:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:30.672 21:16:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:30.672 node0=512 expecting 512 00:02:30.672 21:16:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:30.672 21:16:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:30.672 21:16:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:30.672 21:16:56 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:30.672 node1=512 expecting 512 00:02:30.672 21:16:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:30.672 00:02:30.672 real 0m1.442s 00:02:30.672 user 0m0.600s 00:02:30.672 sys 0m0.802s 00:02:30.672 21:16:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:30.672 21:16:56 -- common/autotest_common.sh@10 -- # set +x 00:02:30.672 ************************************ 00:02:30.672 END TEST per_node_1G_alloc 00:02:30.672 ************************************ 00:02:30.672 21:16:56 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:30.672 
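
The sorted_t/sorted_s assignments in the loop above are a small set trick: the per-node page count is used as an array index, so after the loop the array holds exactly one distinct index when every node received the same allocation. A minimal stand-alone sketch of the idea (bash 4+; the counts are hypothetical stand-ins, not read from a live system, and the real SPDK script differs in detail):

    #!/usr/bin/env bash
    nodes_test=(512 512)                  # hypothetical per-node page counts
    declare -A sorted_t
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # array key = page count, value unused
        echo "node${node}=${nodes_test[node]} expecting 512"
    done
    # a single distinct key means every node got the same allocation
    (( ${#sorted_t[@]} == 1 )) && echo 'even allocation verified'
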
00:02:30.672 21:16:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:30.672 21:16:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:30.672 21:16:56 -- common/autotest_common.sh@10 -- # set +x
00:02:30.672 ************************************
00:02:30.672 START TEST even_2G_alloc
00:02:30.672 ************************************
00:02:30.672 21:16:56 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:02:30.672 21:16:56 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:02:30.672 21:16:56 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:30.672 21:16:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:30.672 21:16:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:30.672 21:16:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:30.672 21:16:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:30.672 21:16:56 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:30.672 21:16:56 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:30.672 21:16:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:30.672 21:16:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:30.672 21:16:56 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:30.672 21:16:56 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:30.672 21:16:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:30.672 21:16:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:30.672 21:16:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:30.672 21:16:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:30.672 21:16:56 -- setup/hugepages.sh@83 -- # : 512
00:02:30.672 21:16:56 -- setup/hugepages.sh@84 -- # : 1
00:02:30.672 21:16:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:30.672 21:16:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:30.672 21:16:56 -- setup/hugepages.sh@83 -- # : 0
00:02:30.672 21:16:56 -- setup/hugepages.sh@84 -- # : 0
00:02:30.672 21:16:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:30.672 21:16:56 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:02:30.672 21:16:56 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:02:30.672 21:16:56 -- setup/hugepages.sh@153 -- # setup output
00:02:30.672 21:16:56 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:30.672 21:16:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:32.054 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:32.054 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:32.054 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:32.054 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:32.054 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:32.054 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:32.054 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:32.054 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:32.054 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:32.054 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:32.054 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:32.054 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:32.054 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:32.054 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:32.054 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:32.054 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
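
In the get_test_nr_hugepages trace earlier in this block, the requested 2097152 kB becomes 1024 default-size pages (Hugepagesize is 2048 kB in the snapshots below) and is split evenly across the two NUMA nodes, 512 per node. A minimal sketch of that arithmetic (Linux /proc/meminfo assumed; variable names are illustrative, not the SPDK helpers, and the division is assumed to come out even):

    #!/usr/bin/env bash
    size_kb=2097152                                   # requested total: 2 GiB in kB
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 2097152 / 2048 = 1024 pages

    nodes=2                                           # assumed NUMA node count
    declare -a nodes_test
    for (( node = 0; node < nodes; node++ )); do
        nodes_test[node]=$(( nr_hugepages / nodes ))  # 512 pages per node
    done
    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]}"        # node0=512, node1=512
    done
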
00:02:32.054 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:32.054 21:16:57 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:02:32.054 21:16:57 -- setup/hugepages.sh@89 -- # local node
00:02:32.054 21:16:57 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:32.054 21:16:57 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:32.055 21:16:57 -- setup/hugepages.sh@92 -- # local surp
00:02:32.055 21:16:57 -- setup/hugepages.sh@93 -- # local resv
00:02:32.055 21:16:57 -- setup/hugepages.sh@94 -- # local anon
00:02:32.055 21:16:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:32.055 21:16:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:32.055 21:16:57 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:32.055 21:16:57 -- setup/common.sh@18 -- # local node=
00:02:32.055 21:16:57 -- setup/common.sh@19 -- # local var val
00:02:32.055 21:16:57 -- setup/common.sh@20 -- # local mem_f mem
00:02:32.055 21:16:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:32.055 21:16:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:32.055 21:16:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:32.055 21:16:57 -- setup/common.sh@28 -- # mapfile -t mem
00:02:32.055 21:16:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:32.055 21:16:57 -- setup/common.sh@31 -- # IFS=': '
00:02:32.055 21:16:57 -- setup/common.sh@31 -- # read -r var val _
00:02:32.055 21:16:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36787904 kB' 'MemAvailable: 41947120 kB' 'Buffers: 2696 kB' 'Cached: 18857480 kB' 'SwapCached: 0 kB' 'Active: 14778688 kB' 'Inactive: 4646328 kB' 'Active(anon): 14164696 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567984 kB' 'Mapped: 187868 kB' 'Shmem: 13599856 kB' 'KReclaimable: 543332 kB' 'Slab: 936200 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392868 kB' 'KernelStack: 12816 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15345024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196728 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
[setup/common.sh@31-32 step through every snapshot key (MemTotal .. HardwareCorrupted) without a match for AnonHugePages]
00:02:32.056 21:16:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:32.056 21:16:57 -- setup/common.sh@33 -- # echo 0
00:02:32.056 21:16:57 -- setup/common.sh@33 -- # return 0
00:02:32.056 21:16:57 -- setup/hugepages.sh@97 -- # anon=0
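
Each get_meminfo call traced here follows the same pattern: read /proc/meminfo into an array with mapfile, then walk it with IFS=': ' read until the requested key matches, printing 0 when it never does. A simplified stand-alone analogue (bash 4+ for mapfile; it omits the per-node 'Node N' prefix stripping that the real setup/common.sh performs):

    get_meminfo() {
        local get=$1 var val _ line
        local -a mem
        mapfile -t mem < /proc/meminfo          # one array element per line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # skip non-matching keys
            echo "${val:-0}"                    # value without the 'kB' suffix
            return 0
        done
        echo 0                                  # key absent: report 0
    }
    get_meminfo AnonHugePages                   # prints 0 in the run above
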
00:02:32.056 21:16:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:32.056 21:16:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:32.056 21:16:57 -- setup/common.sh@18 -- # local node=
00:02:32.056 21:16:57 -- setup/common.sh@19 -- # local var val
00:02:32.056 21:16:57 -- setup/common.sh@20 -- # local mem_f mem
00:02:32.056 21:16:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:32.056 21:16:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:32.056 21:16:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:32.056 21:16:57 -- setup/common.sh@28 -- # mapfile -t mem
00:02:32.056 21:16:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:32.056 21:16:57 -- setup/common.sh@31 -- # IFS=': '
00:02:32.056 21:16:57 -- setup/common.sh@31 -- # read -r var val _
00:02:32.056 21:16:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36788340 kB' 'MemAvailable: 41947556 kB' 'Buffers: 2696 kB' 'Cached: 18857480 kB' 'SwapCached: 0 kB' 'Active: 14779364 kB' 'Inactive: 4646328 kB' 'Active(anon): 14165372 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568708 kB' 'Mapped: 187868 kB' 'Shmem: 13599856 kB' 'KReclaimable: 543332 kB' 'Slab: 936200 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392868 kB' 'KernelStack: 12832 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15345036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196728 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
[setup/common.sh@31-32 step through every snapshot key (MemTotal .. HugePages_Rsvd) without a match for HugePages_Surp]
00:02:32.057 21:16:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:32.057 21:16:57 -- setup/common.sh@33 -- # echo 0
00:02:32.057 21:16:57 -- setup/common.sh@33 -- # return 0
00:02:32.057 21:16:57 -- setup/hugepages.sh@99 -- # surp=0
00:02:32.057 21:16:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:32.057 21:16:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:32.057 21:16:57 -- setup/common.sh@18 -- # local node=
00:02:32.057 21:16:57 -- setup/common.sh@19 -- # local var val
00:02:32.057 21:16:57 -- setup/common.sh@20 -- # local mem_f mem
00:02:32.057 21:16:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:32.057 21:16:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:32.057 21:16:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:32.057 21:16:57 -- setup/common.sh@28 -- # mapfile -t mem
00:02:32.057 21:16:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:32.057 21:16:57 -- setup/common.sh@31 -- # IFS=': '
00:02:32.057 21:16:57 -- setup/common.sh@31 -- # read -r var val _
00:02:32.057 21:16:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36792636 kB' 'MemAvailable: 41951852 kB' 'Buffers: 2696 kB' 'Cached: 18857492 kB' 'SwapCached: 0 kB' 'Active: 14778752 kB' 'Inactive: 4646328 kB' 'Active(anon): 14164760 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568112 kB' 'Mapped: 187868 kB' 'Shmem: 13599868 kB' 'KReclaimable: 543332 kB' 'Slab: 936188 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392856 kB' 'KernelStack: 12800 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15345048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
[setup/common.sh@31-32 step through every snapshot key (MemTotal .. HugePages_Free) without a match for HugePages_Rsvd]
00:02:32.059 21:16:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:32.059 21:16:57 -- setup/common.sh@33 -- # echo 0
00:02:32.059 21:16:57 -- setup/common.sh@33 -- # return 0
00:02:32.059 21:16:57 -- setup/hugepages.sh@100 -- # resv=0
00:02:32.059 21:16:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:32.059 nr_hugepages=1024
00:02:32.059 21:16:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:32.059 resv_hugepages=0
00:02:32.059 21:16:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:32.059 surplus_hugepages=0
00:02:32.059 21:16:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:32.059 anon_hugepages=0
00:02:32.059 21:16:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:32.059 21:16:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
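
The two arithmetic checks above are the heart of verify_nr_hugepages as traced in this run: the pool passes when the page count the test requested matches the kernel's view once any surplus and reserved pages read earlier are accounted for. A loose stand-alone rendering of that idea (values hard-coded from this run for illustration; not the SPDK function itself):

    #!/usr/bin/env bash
    nr_hugepages=1024   # requested page count (NRHUGE in this run)
    surp=0              # HugePages_Surp read above
    resv=0              # HugePages_Rsvd read above
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: HugePages_Total=${total}"
    else
        echo "unexpected hugepage count: HugePages_Total=${total}" >&2
    fi

The final get_meminfo HugePages_Total fetch that follows supplies the kernel-side figure for this comparison.
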
00:02:32.059 21:16:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:32.059 21:16:57 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:32.059 21:16:57 -- setup/common.sh@18 -- # local node=
00:02:32.059 21:16:57 -- setup/common.sh@19 -- # local var val
00:02:32.059 21:16:57 -- setup/common.sh@20 -- # local mem_f mem
00:02:32.059 21:16:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:32.059 21:16:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:32.059 21:16:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:32.059 21:16:57 -- setup/common.sh@28 -- # mapfile -t mem
00:02:32.059 21:16:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:32.059 21:16:57 -- setup/common.sh@31 -- # IFS=': '
00:02:32.059 21:16:57 -- setup/common.sh@31 -- # read -r var val _
00:02:32.059 21:16:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36792864 kB' 'MemAvailable: 41952080 kB' 'Buffers: 2696 kB' 'Cached: 18857508 kB' 'SwapCached: 0 kB' 'Active: 14778612 kB' 'Inactive: 4646328 kB' 'Active(anon): 14164620 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567968 kB' 'Mapped: 187868 kB' 'Shmem: 13599884 kB' 'KReclaimable: 543332 kB' 'Slab: 936220 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392888 kB' 'KernelStack: 12864 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15345064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
[setup/common.sh@31-32 step through the snapshot keys (MemTotal .. VmallocTotal) toward the HugePages_Total match; the captured log breaks off here, mid-loop]
00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.060 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.060 21:16:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:32.060 21:16:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.060 21:16:57 -- setup/common.sh@33 -- # echo 1024 00:02:32.060 21:16:57 -- setup/common.sh@33 -- # return 0 00:02:32.060 21:16:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:32.060 21:16:57 -- setup/hugepages.sh@112 -- # get_nodes 00:02:32.060 21:16:57 -- setup/hugepages.sh@27 -- # local node 00:02:32.060 21:16:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.060 21:16:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:32.060 21:16:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.060 21:16:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:32.060 21:16:57 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:32.060 21:16:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:32.060 21:16:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:32.060 21:16:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:32.060 21:16:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:32.060 21:16:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:32.060 21:16:57 -- setup/common.sh@18 -- # local node=0 00:02:32.060 21:16:57 -- setup/common.sh@19 -- # local var val 00:02:32.060 21:16:57 -- setup/common.sh@20 -- # local mem_f mem 00:02:32.060 21:16:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.061 21:16:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:32.061 21:16:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:32.061 21:16:57 -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.061 21:16:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.061 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.061 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.061 21:16:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23079028 kB' 'MemUsed: 9750856 kB' 'SwapCached: 0 kB' 'Active: 7233056 kB' 'Inactive: 267840 kB' 'Active(anon): 6832780 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 267840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7301536 kB' 'Mapped: 65844 kB' 'AnonPages: 202524 kB' 'Shmem: 6633420 kB' 'KernelStack: 7112 kB' 'PageTables: 4712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 282776 kB' 'Slab: 510828 kB' 'SReclaimable: 282776 kB' 'SUnreclaim: 228052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:32.061 21:16:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.061 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.061 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.061 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.061 21:16:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.061 21:16:57 -- setup/common.sh@32 -- # continue 00:02:32.061 21:16:57 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.061 21:16:57 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.061 21:16:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
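Every one of these passes is the same get_meminfo helper from setup/common.sh: pick /proc/meminfo or a node's sysfs meminfo file, then scan it line by line for a single key. The [[ key == pattern ]] / continue pairs that dominate this log are that scan. A minimal re-creation under simplified, hypothetical names (get_meminfo_sketch is ours, not the verbatim SPDK source):

    #!/usr/bin/env bash
    # get_meminfo_sketch KEY [NODE] - print KEY's value from a meminfo file.
    get_meminfo_sketch() {
        local get=$1 node=${2-} mem_f=/proc/meminfo var val _
        # With a node argument, read the per-node sysfs file instead,
        # mirroring the common.sh@23/@24 branch seen in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Node files prefix every line with "Node <n> "; strip that, then
        # compare each key to the requested one, skipping non-matches.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    get_meminfo_sketch HugePages_Surp 0   # prints 0 here, per the node0 dump above

The pass below is exactly this loop walking the node0 dump just printed until it reaches HugePages_Surp.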
[ ... setup/common.sh@32: node0 keys from MemTotal through HugePages_Free each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continue ... ]
00:02:32.061 21:16:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:32.061 21:16:57 -- setup/common.sh@33 -- # echo 0
00:02:32.062 21:16:57 -- setup/common.sh@33 -- # return 0
00:02:32.062 21:16:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
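Node0 is done: its surplus (0) has been folded into nodes_test[0]. A sketch of the hugepages.sh@115-@117 accumulation driving these per-node passes, reusing get_meminfo_sketch from above (resv and both surpluses are 0 in this run, so the array keeps the values the trace shows):

    resv=0
    declare -a nodes_test=([0]=512 [1]=512)
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                      # fold in global reserved pages
        surp=$(get_meminfo_sketch HugePages_Surp "$node")   # per-node surplus, 0 on both nodes
        (( nodes_test[node] += surp ))
    done
    echo "${nodes_test[@]}"                                 # still "512 512"

The same pass now repeats for node1 against /sys/devices/system/node/node1/meminfo.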
00:02:32.062 21:16:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:32.062 21:16:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:32.062 21:16:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:32.062 21:16:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:32.062 21:16:57 -- setup/common.sh@18 -- # local node=1
00:02:32.062 21:16:57 -- setup/common.sh@19 -- # local var val
00:02:32.062 21:16:57 -- setup/common.sh@20 -- # local mem_f mem
00:02:32.062 21:16:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:32.062 21:16:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:32.062 21:16:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:32.062 21:16:57 -- setup/common.sh@28 -- # mapfile -t mem
00:02:32.062 21:16:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:32.062 21:16:57 -- setup/common.sh@31 -- # IFS=': '
00:02:32.062 21:16:57 -- setup/common.sh@31 -- # read -r var val _
00:02:32.062 21:16:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 13714216 kB' 'MemUsed: 13997608 kB' 'SwapCached: 0 kB' 'Active: 7545596 kB' 'Inactive: 4378488 kB' 'Active(anon): 7331880 kB' 'Inactive(anon): 0 kB' 'Active(file): 213716 kB' 'Inactive(file): 4378488 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11558684 kB' 'Mapped: 122024 kB' 'AnonPages: 365440 kB' 'Shmem: 6966480 kB' 'KernelStack: 5752 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 260556 kB' 'Slab: 425392 kB' 'SReclaimable: 260556 kB' 'SUnreclaim: 164836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[ ... setup/common.sh@32: node1 keys from MemTotal through HugePages_Free each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continue ... ]
00:02:32.063 21:16:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:32.063 21:16:57 -- setup/common.sh@33 -- # echo 0
00:02:32.063 21:16:57 -- setup/common.sh@33 -- # return 0
00:02:32.063 21:16:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:32.063 21:16:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:32.063 21:16:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:32.063 21:16:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:32.063 21:16:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:32.063 node0=512 expecting 512
00:02:32.063 21:16:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:32.063 21:16:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:32.063 21:16:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:32.063 21:16:57 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:32.063 node1=512 expecting 512
00:02:32.063 21:16:57 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:32.063
00:02:32.063 real 0m1.343s
00:02:32.063 user 0m0.561s
00:02:32.063 sys 0m0.740s
00:02:32.063 21:16:57 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:32.063 21:16:57 -- common/autotest_common.sh@10 -- # set +x
00:02:32.063 ************************************
00:02:32.063 END TEST even_2G_alloc
00:02:32.063 ************************************
00:02:32.063 21:16:57 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:32.063 21:16:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:32.063 21:16:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:32.063 21:16:57 -- common/autotest_common.sh@10 -- # set +x
00:02:32.370 ************************************
00:02:32.370 START TEST odd_alloc
00:02:32.370 ************************************
00:02:32.370 21:16:57 -- common/autotest_common.sh@1111 -- # odd_alloc
00:02:32.370 21:16:57 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:02:32.370 21:16:57 -- setup/hugepages.sh@49 -- # local size=2098176
00:02:32.370 21:16:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:32.370 21:16:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:32.370 21:16:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:02:32.370 21:16:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:32.370 21:16:57 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:32.370 21:16:57 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:32.370 21:16:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:02:32.370 21:16:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:32.370 21:16:57 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:32.370 21:16:57 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:32.370 21:16:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:32.370 21:16:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
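odd_alloc asks for HUGEMEM=2049 MB, i.e. size=2098176 kB, and the trace turns that into nr_hugepages=1025. A hedged sketch of what get_test_nr_hugepages / get_test_nr_hugepages_per_node just computed: with 2048 kB pages a round-up division reproduces the 1025 (the exact rounding in setup/hugepages.sh may be spelled differently), and a loop filling nodes from the highest index down reproduces the split traced below, where node0 absorbs the odd page:

    size_kb=2098176 hugepagesize_kb=2048
    _nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # -> 1025
    _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 1025/2=512, then 513/1=513
        (( _nr_hugepages -= nodes_test[_no_nodes - 1] ))             # 513 pages left after node1
        (( _no_nodes-- ))
    done
    echo "${nodes_test[@]}"   # prints "513 512" -- node0=513, node1=512

The @81-@84 iterations below show the same two assignments (512 for node1, then 513 for node0).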
00:02:32.370 21:16:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:32.370 21:16:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:32.370 21:16:57 -- setup/hugepages.sh@83 -- # : 513
00:02:32.370 21:16:57 -- setup/hugepages.sh@84 -- # : 1
00:02:32.370 21:16:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:32.370 21:16:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:02:32.370 21:16:57 -- setup/hugepages.sh@83 -- # : 0
00:02:32.370 21:16:57 -- setup/hugepages.sh@84 -- # : 0
00:02:32.370 21:16:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:32.370 21:16:57 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:02:32.370 21:16:57 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:02:32.370 21:16:57 -- setup/hugepages.sh@160 -- # setup output
00:02:32.370 21:16:57 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:32.370 21:16:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:33.306 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:33.306 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:33.306 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:33.306 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:33.306 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:33.306 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:33.306 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:33.306 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:33.306 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:33.306 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:33.306 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:33.306 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:33.306 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:33.306 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:33.306 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:33.306 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:33.306 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:33.575 21:16:59 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:02:33.575 21:16:59 -- setup/hugepages.sh@89 -- # local node
00:02:33.575 21:16:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:33.575 21:16:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:33.575 21:16:59 -- setup/hugepages.sh@92 -- # local surp
00:02:33.575 21:16:59 -- setup/hugepages.sh@93 -- # local resv
00:02:33.575 21:16:59 -- setup/hugepages.sh@94 -- # local anon
00:02:33.575 21:16:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:33.575 21:16:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:33.575 21:16:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:33.575 21:16:59 -- setup/common.sh@18 -- # local node=
00:02:33.575 21:16:59 -- setup/common.sh@19 -- # local var val
00:02:33.575 21:16:59 -- setup/common.sh@20 -- # local mem_f mem
00:02:33.575 21:16:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:33.575 21:16:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:33.575 21:16:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:33.575 21:16:59 -- setup/common.sh@28 -- # mapfile -t mem
00:02:33.575 21:16:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:33.575 21:16:59 -- setup/common.sh@31 -- # IFS=': '
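After setup.sh re-runs with the 513/512 layout, verify_nr_hugepages starts by ruling out transparent hugepages skewing the count: the @96 test checks that THP is not pinned to [never], and only then is AnonHugePages read (it must come back 0). A rough sketch of those two steps, reusing the hypothetical get_meminfo_sketch from earlier (logic simplified from the @96/@97 trace, not the exact script):

    # Current THP mode, e.g. "always [madvise] never" with madvise selected.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *'[never]'* ]]; then
        # THP could hand out anonymous hugepages; count them (0 kB expected).
        anon=$(get_meminfo_sketch AnonHugePages)
    else
        anon=0
    fi
    echo "anon_hugepages=$anon"   # the scan below ends with echo 0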
00:02:33.575 21:16:59 -- setup/common.sh@31 -- # read -r var val _
00:02:33.575 21:16:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36778884 kB' 'MemAvailable: 41938100 kB' 'Buffers: 2696 kB' 'Cached: 18857576 kB' 'SwapCached: 0 kB' 'Active: 14780036 kB' 'Inactive: 4646328 kB' 'Active(anon): 14166044 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569752 kB' 'Mapped: 187268 kB' 'Shmem: 13599952 kB' 'KReclaimable: 543332 kB' 'Slab: 936348 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 393016 kB' 'KernelStack: 12784 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15337548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196636 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
[ ... setup/common.sh@32: every key from MemTotal through HardwareCorrupted fails the \A\n\o\n\H\u\g\e\P\a\g\e\s match and continues ... ]
00:02:33.576 21:16:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:33.576 21:16:59 -- setup/common.sh@33 -- # echo 0
00:02:33.576 21:16:59 -- setup/common.sh@33 -- # return 0
00:02:33.576 21:16:59 -- setup/hugepages.sh@97 -- # anon=0
00:02:33.576 21:16:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:33.576 21:16:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:33.576 21:16:59 -- setup/common.sh@18 -- # local node=
00:02:33.576 21:16:59 -- setup/common.sh@19 -- # local var val
00:02:33.576 21:16:59 -- setup/common.sh@20 -- # local mem_f mem
00:02:33.576 21:16:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:33.576 21:16:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:33.576 21:16:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:33.576 21:16:59 -- setup/common.sh@28 -- # mapfile -t mem
00:02:33.576 21:16:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:33.576 21:16:59 -- setup/common.sh@31 -- # IFS=': '
00:02:33.576 21:16:59 -- setup/common.sh@31 -- # read -r var val _
00:02:33.576 21:16:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36778632 kB' 'MemAvailable: 41937848 kB' 'Buffers: 2696 kB' 'Cached: 18857576 kB' 'SwapCached: 0 kB' 'Active: 14781180 kB' 'Inactive: 4646328 kB' 'Active(anon): 14167188 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570496 kB' 'Mapped: 187752 kB' 'Shmem: 13599952 kB' 'KReclaimable: 543332 kB' 'Slab: 936340 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 393008 kB' 'KernelStack: 12800 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15337560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196584 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
[ ... setup/common.sh@32: the HugePages_Surp scan is still stepping through the meminfo keys (MemTotal up to VmallocTotal so far) when this capture of the log cuts off ... ]
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 
21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.577 21:16:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.577 21:16:59 -- setup/common.sh@33 -- # echo 0 00:02:33.577 21:16:59 -- setup/common.sh@33 -- # return 0 00:02:33.577 21:16:59 -- setup/hugepages.sh@99 -- # surp=0 00:02:33.577 21:16:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:33.577 21:16:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:33.577 21:16:59 -- setup/common.sh@18 -- # local node= 00:02:33.577 21:16:59 -- setup/common.sh@19 -- # local var val 00:02:33.577 21:16:59 -- setup/common.sh@20 -- # local mem_f mem 00:02:33.577 21:16:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.577 21:16:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:33.577 21:16:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:33.577 21:16:59 -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.577 21:16:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:33.577 21:16:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:33.578 21:16:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36778896 kB' 'MemAvailable: 41938112 kB' 'Buffers: 2696 kB' 'Cached: 18857584 kB' 'SwapCached: 0 kB' 'Active: 14779296 kB' 'Inactive: 4646328 kB' 'Active(anon): 14165304 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568644 kB' 'Mapped: 187316 kB' 'Shmem: 13599960 kB' 'KReclaimable: 543332 kB' 'Slab: 936340 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 393008 kB' 'KernelStack: 12800 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15335584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196584 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:33.578 21:16:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:33.578 21:16:59 -- setup/common.sh@32 -- # continue 00:02:33.578 21:16:59 -- 
[... per-key scan xtrace elided: every field in the dump above is tested against HugePages_Rsvd and skipped via continue until the match below ...]
00:02:33.579 21:16:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:33.579 21:16:59 -- setup/common.sh@33 -- # echo 0
00:02:33.579 21:16:59 -- setup/common.sh@33 -- # return 0
00:02:33.579 21:16:59 -- setup/hugepages.sh@100 -- # resv=0
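The three lookups above (AnonHugePages, HugePages_Surp, HugePages_Rsvd) all go through the same get_meminfo scan the xtrace keeps replaying: load the meminfo file into an array, strip any per-node prefix, then split each line on ': ' until the requested key matches. Below is a minimal standalone sketch of that pattern; the function name get_meminfo_value is a hypothetical stand-in, not the setup/common.sh helper itself.

#!/usr/bin/env bash
# Sketch of the scan pattern visible in the xtrace: one pass over the
# meminfo lines, comparing each key to the requested one.
shopt -s extglob

get_meminfo_value() {   # hypothetical helper, not the SPDK function
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    local -a mem
    # With a node index, read that node's own meminfo file instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop any "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Same compare-and-continue the trace shows for every key.
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    return 1
}

get_meminfo_value HugePages_Rsvd   # prints 0 on the host traced here

The linear scan is cheap because /proc/meminfo is only a few dozen lines; the parameter expansion on the array is what lets the same function serve both the global and the per-node files.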
00:02:33.579 21:16:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:33.579 nr_hugepages=1025
00:02:33.579 21:16:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:33.579 resv_hugepages=0
00:02:33.579 21:16:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:33.579 surplus_hugepages=0
00:02:33.579 21:16:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:33.579 anon_hugepages=0
00:02:33.579 21:16:59 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:33.579 21:16:59 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
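hugepages.sh@107 and @109 are plain shell-arithmetic guards: the pool the test configured (1025 pages) must equal what the kernel reports once surplus and reserved pages are added back, here 1025 + 0 + 0. A compact restatement with the values echoed above; the failure branch is illustrative, since the real script's error handling is not shown in this excerpt.

# Values echoed by the trace above.
nr_hugepages=1025   # HugePages_Total
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd

# Guards equivalent to hugepages.sh@107/@109.
if (( 1025 == nr_hugepages + surp + resv )) && (( 1025 == nr_hugepages )); then
    echo "hugepage pool consistent: $nr_hugepages pages"
else
    echo "hugepage pool mismatch" >&2   # illustrative failure branch
    exit 1
fi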
00:02:33.579 21:16:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:33.579 21:16:59 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:33.579 21:16:59 -- setup/common.sh@18 -- # local node=
00:02:33.579 21:16:59 -- setup/common.sh@19 -- # local var val
00:02:33.579 21:16:59 -- setup/common.sh@20 -- # local mem_f mem
00:02:33.579 21:16:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:33.579 21:16:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:33.579 21:16:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:33.579 21:16:59 -- setup/common.sh@28 -- # mapfile -t mem
00:02:33.579 21:16:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:33.579 21:16:59 -- setup/common.sh@31 -- # IFS=': '
00:02:33.579 21:16:59 -- setup/common.sh@31 -- # read -r var val _
00:02:33.579 21:16:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36778920 kB' 'MemAvailable: 41938136 kB' 'Buffers: 2696 kB' 'Cached: 18857604 kB' 'SwapCached: 0 kB' 'Active: 14780740 kB' 'Inactive: 4646328 kB' 'Active(anon): 14166748 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570000 kB' 'Mapped: 187748 kB' 'Shmem: 13599980 kB' 'KReclaimable: 543332 kB' 'Slab: 936340 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 393008 kB' 'KernelStack: 12784 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15337588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196572 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
[... per-key scan xtrace elided: every field in the dump above is tested against HugePages_Total and skipped via continue until the match below ...]
00:02:33.580 21:16:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:33.580 21:16:59 -- setup/common.sh@33 -- # echo 1025
00:02:33.580 21:16:59 -- setup/common.sh@33 -- # return 0
00:02:33.580 21:16:59 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:33.580 21:16:59 -- setup/hugepages.sh@112 -- # get_nodes
00:02:33.580 21:16:59 -- setup/hugepages.sh@27 -- # local node
00:02:33.580 21:16:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:33.580 21:16:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:33.580 21:16:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:33.580 21:16:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:02:33.580 21:16:59 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:33.580 21:16:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
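get_nodes, traced just above, discovers the NUMA topology with an extglob pattern and keys an array by node index; on this host it finds node0 holding 512 hugepages and node1 holding 513. A self-contained sketch of the same discovery loop follows. The nr_hugepages sysfs path is an assumption that matches the 2048 kB Hugepagesize in the dumps; the real get_nodes may obtain the counts differently.

#!/usr/bin/env bash
# Sketch of the get_nodes discovery loop from the trace.
shopt -s extglob nullglob

declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips through the last "node": node0 -> 0, node1 -> 1.
    # Assumed source of the per-node count (matches Hugepagesize: 2048 kB).
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
echo "found $no_nodes nodes with counts: ${nodes_sys[*]}"   # 512 and 513 here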
00:02:33.580 21:16:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:33.580 21:16:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:33.580 21:16:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:33.581 21:16:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:33.581 21:16:59 -- setup/common.sh@18 -- # local node=0
00:02:33.581 21:16:59 -- setup/common.sh@19 -- # local var val
00:02:33.581 21:16:59 -- setup/common.sh@20 -- # local mem_f mem
00:02:33.581 21:16:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:33.581 21:16:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:33.581 21:16:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:33.581 21:16:59 -- setup/common.sh@28 -- # mapfile -t mem
00:02:33.581 21:16:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:33.581 21:16:59 -- setup/common.sh@31 -- # IFS=': '
00:02:33.581 21:16:59 -- setup/common.sh@31 -- # read -r var val _
00:02:33.581 21:16:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23079152 kB' 'MemUsed: 9750732 kB' 'SwapCached: 0 kB' 'Active: 7231016 kB' 'Inactive: 267840 kB' 'Active(anon): 6830740 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 267840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7301584 kB' 'Mapped: 64908 kB' 'AnonPages: 200448 kB' 'Shmem: 6633468 kB' 'KernelStack: 7080 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 282776 kB' 'Slab: 511068 kB' 'SReclaimable: 282776 kB' 'SUnreclaim: 228292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-key scan xtrace elided: every field in the node0 dump above is tested against HugePages_Surp and skipped via continue until the match below ...]
00:02:33.582 21:16:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:33.582 21:16:59 -- setup/common.sh@33 -- # echo 0
00:02:33.582 21:16:59 -- setup/common.sh@33 -- # return 0
00:02:33.582 21:16:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
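Each pass of the hugepages.sh@115 loop folds the reserved count and the node's own surplus into nodes_test; the surplus comes from the node's meminfo file, whose lines carry a "Node <n> " prefix that get_meminfo strips before scanning. A sketch of the whole loop under those assumptions: node_surplus is a hypothetical helper, and the sed-based prefix strip stands in for the parameter expansion seen in the trace.

#!/usr/bin/env bash
# Sketch of the per-node accumulation traced at hugepages.sh@115-117.
resv=0                                     # HugePages_Rsvd from earlier
declare -A nodes_test=([0]=512 [1]=513)    # per-node counts from get_nodes

node_surplus() {   # hypothetical helper
    local var val _
    # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0".
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Surp ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "/sys/devices/system/node/node$1/meminfo")
    return 1
}

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))        # hugepages.sh@116, resv is 0 here
    surp_n=$(node_surplus "$node") || surp_n=0
    (( nodes_test[node] += surp_n ))      # hugepages.sh@117, += 0 in this run
    echo "node$node expects ${nodes_test[node]} hugepages"
done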
[... xtrace field scan elided: every field of the node1 snapshot above (MemTotal through HugePages_Free) is skipped with 'continue' until HugePages_Surp matches ...]
00:02:33.582 21:16:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.582 21:16:59 -- setup/common.sh@33 -- # echo 0 00:02:33.582 21:16:59 -- setup/common.sh@33 -- # return 0 00:02:33.582 21:16:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:33.583 21:16:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:33.583 21:16:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:33.583 21:16:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:33.583 21:16:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:33.583 node0=512 expecting 513 00:02:33.583 21:16:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:33.583 21:16:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:33.583 21:16:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:33.583 21:16:59 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:33.583 node1=513 expecting 512 00:02:33.583 21:16:59 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:33.583 00:02:33.583 real 0m1.456s 00:02:33.583 user 0m0.602s 00:02:33.583 sys 0m0.806s 00:02:33.583 21:16:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:33.583 21:16:59 -- common/autotest_common.sh@10 -- # set +x 00:02:33.583 ************************************ 00:02:33.583 END TEST odd_alloc 00:02:33.583 ************************************ 00:02:33.842 21:16:59 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:33.842 21:16:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:33.842 21:16:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:33.842 21:16:59 -- common/autotest_common.sh@10 -- # set +x 00:02:33.842 ************************************ 00:02:33.842 START TEST custom_alloc 00:02:33.842 ************************************ 00:02:33.842 21:16:59 -- common/autotest_common.sh@1111 -- # custom_alloc 00:02:33.842 21:16:59 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:33.842 21:16:59 -- setup/hugepages.sh@169 -- # local node 00:02:33.842 21:16:59 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:33.842 21:16:59 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:33.842 21:16:59 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:33.842 21:16:59 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:33.842 21:16:59 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:33.842 21:16:59 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:33.842 21:16:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:33.842 21:16:59 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:33.842 21:16:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.842 21:16:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:33.842 21:16:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.842 21:16:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.842 21:16:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.842 21:16:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:33.842 21:16:59 -- setup/hugepages.sh@83 -- # : 256 00:02:33.842 21:16:59 -- setup/hugepages.sh@84 -- # : 1 00:02:33.842 21:16:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:33.842 21:16:59 -- setup/hugepages.sh@83 -- # : 0 00:02:33.842 21:16:59 -- setup/hugepages.sh@84 -- # : 0 00:02:33.842 21:16:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:33.842 21:16:59 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:33.842 21:16:59 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:33.842 21:16:59 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
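The two nr_hugepages values just computed are plain division by this machine's default hugepage size, which the meminfo snapshots below report as Hugepagesize: 2048 kB: the 1048576 kB request becomes 512 pages (nodes_hp[0]) and the 2097152 kB request becomes 1024 pages (nodes_hp[1]). A throwaway illustration of the arithmetic, where pages_for is a hypothetical helper and not part of hugepages.sh:

    # size in kB divided by the 2048 kB default hugepage size seen in this run
    pages_for() { echo $(( $1 / 2048 )); }
    pages_for 1048576        # -> 512   (nodes_hp[0])
    pages_for 2097152        # -> 1024  (nodes_hp[1])
    echo $(( 512 + 1024 ))   # -> 1536, the HugePages_Total verified after setup

The 512 + 1024 split is what the HUGENODE string assembled below hands to scripts/setup.sh, and the 1536 total is what verify_nr_hugepages checks afterwards.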
00:02:33.842 21:16:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:33.842 21:16:59 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:33.842 21:16:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.842 21:16:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:33.842 21:16:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.842 21:16:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.842 21:16:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.842 21:16:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:33.842 21:16:59 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:33.842 21:16:59 -- setup/hugepages.sh@78 -- # return 0 00:02:33.842 21:16:59 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:33.842 21:16:59 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:33.842 21:16:59 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:33.842 21:16:59 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:33.842 21:16:59 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:33.842 21:16:59 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:33.842 21:16:59 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:33.842 21:16:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.842 21:16:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:33.842 21:16:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.842 21:16:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.842 21:16:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.842 21:16:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:33.842 21:16:59 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:33.842 21:16:59 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:33.842 21:16:59 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:33.842 21:16:59 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:33.842 21:16:59 -- setup/hugepages.sh@78 -- # return 0 00:02:33.842 21:16:59 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:33.842 21:16:59 -- setup/hugepages.sh@187 -- # setup output 00:02:33.842 21:16:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:33.842 21:16:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:34.777 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:34.777 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:35.038 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:35.038 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:35.038 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:35.038 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:35.038 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:35.038 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:35.038 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:35.038 0000:80:04.7 (8086 0e27): Already using the vfio-pci 
driver 00:02:35.038 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:35.038 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:35.038 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:35.038 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:35.038 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:35.038 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:35.038 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:35.038 21:17:00 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:35.038 21:17:00 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:35.038 21:17:00 -- setup/hugepages.sh@89 -- # local node 00:02:35.038 21:17:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:35.038 21:17:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:35.038 21:17:00 -- setup/hugepages.sh@92 -- # local surp 00:02:35.038 21:17:00 -- setup/hugepages.sh@93 -- # local resv 00:02:35.038 21:17:00 -- setup/hugepages.sh@94 -- # local anon 00:02:35.038 21:17:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:35.038 21:17:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:35.038 21:17:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:35.038 21:17:00 -- setup/common.sh@18 -- # local node= 00:02:35.038 21:17:00 -- setup/common.sh@19 -- # local var val 00:02:35.038 21:17:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:35.038 21:17:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.038 21:17:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.038 21:17:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.038 21:17:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.038 21:17:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.038 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.038 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.038 21:17:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35717092 kB' 'MemAvailable: 40876308 kB' 'Buffers: 2696 kB' 'Cached: 18857672 kB' 'SwapCached: 0 kB' 'Active: 14775688 kB' 'Inactive: 4646328 kB' 'Active(anon): 14161696 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565036 kB' 'Mapped: 186896 kB' 'Shmem: 13600048 kB' 'KReclaimable: 543332 kB' 'Slab: 936328 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392996 kB' 'KernelStack: 12816 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 15331684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196712 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:35.038 21:17:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.038 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.038 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.038 21:17:00 
-- setup/common.sh@31 -- # read -r var val _
[... xtrace field scan elided: every field of the /proc/meminfo snapshot above is skipped with 'continue' until AnonHugePages matches ...]
00:02:35.039 21:17:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.039 21:17:00 -- setup/common.sh@33 -- # echo 0 00:02:35.039 21:17:00 -- setup/common.sh@33 -- # return 0 00:02:35.039 21:17:00 -- setup/hugepages.sh@97 -- # anon=0 00:02:35.039 21:17:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:35.039 21:17:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.039 21:17:00 -- setup/common.sh@18 -- # local node= 00:02:35.039 21:17:00 -- setup/common.sh@19 -- # local var val 00:02:35.039 21:17:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:35.039 21:17:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.039 21:17:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.039 21:17:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.039 21:17:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.039 21:17:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.039 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.039 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.039 21:17:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35717716 kB' 'MemAvailable: 40876932 kB' 'Buffers: 2696 kB' 'Cached: 18857672 kB' 'SwapCached: 0 kB' 'Active: 14775984 kB' 'Inactive: 4646328 kB' 'Active(anon): 14161992 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565340 kB' 'Mapped: 186896 kB' 'Shmem: 13600048 kB' 'KReclaimable: 543332 kB' 'Slab: 936324 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392992 kB' 'KernelStack: 12800 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 15331696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
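verify_nr_hugepages collects one counter per scan: AnonHugePages came back 0 above, and the HugePages_Surp and HugePages_Rsvd lookups condensed below return 0 as well; the test then requires that HugePages_Total equal the requested pages plus surplus plus reserved. The bookkeeping reduces to the check below, shown with the values observed in this run; the variable names mirror the trace (anon, surp, resv, nr_hugepages), the rest is illustrative:

    # counters pulled from /proc/meminfo by the four get_meminfo calls
    nr_hugepages=1536 anon=0 surp=0 resv=0 total=1536
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'
    (( anon == 0 )) && echo 'no anonymous (transparent) hugepages interfering'

With all three side counters at 0, the check collapses to 1536 == 1536, which is exactly the comparison the trace records at setup/hugepages.sh@107 further down.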
[... xtrace field scan elided: the same snapshot is walked again field by field, every line skipped with 'continue' until HugePages_Surp matches ...]
00:02:35.040 21:17:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.040 21:17:00 -- setup/common.sh@33 -- # echo 0 00:02:35.040 21:17:00 -- setup/common.sh@33 -- # return 0 00:02:35.040 21:17:00 -- setup/hugepages.sh@99 -- # surp=0 00:02:35.040 21:17:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:35.040 21:17:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:35.040 21:17:00 -- setup/common.sh@18 -- # local node= 00:02:35.040 21:17:00 -- setup/common.sh@19 -- # local var val 00:02:35.040 21:17:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:35.040 21:17:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
21:17:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.040 21:17:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.040 21:17:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.040 21:17:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.040 21:17:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35717216 kB' 'MemAvailable: 40876432 kB' 'Buffers: 2696 kB' 'Cached: 18857672 kB' 'SwapCached: 0 kB' 'Active: 14775580 kB' 'Inactive: 4646328 kB' 'Active(anon): 14161588 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564896 kB' 'Mapped: 186888 kB' 'Shmem: 13600048 kB' 'KReclaimable: 543332 kB' 'Slab: 936316 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392984 kB' 'KernelStack: 12816 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 15331712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.040 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.040 21:17:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.040 21:17:00 -- 
setup/common.sh@32 -- # continue
[... xtrace field scan elided: the fresh snapshot above is walked with 'continue' past every field until HugePages_Rsvd matches ...]
00:02:35.302 21:17:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.302 21:17:00 -- setup/common.sh@33 -- # echo 0 00:02:35.302 21:17:00 -- setup/common.sh@33 -- # return 0 00:02:35.302 21:17:00 -- setup/hugepages.sh@100 -- # resv=0 00:02:35.302 21:17:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:35.302 nr_hugepages=1536 00:02:35.302 21:17:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:35.302 resv_hugepages=0 00:02:35.302 21:17:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:35.302 surplus_hugepages=0 00:02:35.302 21:17:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:35.302 anon_hugepages=0 00:02:35.302 21:17:00 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:35.302 21:17:00 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:35.302 21:17:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:35.303 21:17:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:35.303 21:17:00 -- setup/common.sh@18 -- # local node= 00:02:35.303 21:17:00 -- setup/common.sh@19 -- # local var val 00:02:35.303 21:17:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:35.303 21:17:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.303 21:17:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.303 21:17:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.303 21:17:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.303 21:17:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35718892 kB' 'MemAvailable: 40878108 kB' 'Buffers: 2696 kB' 'Cached: 18857700 kB' 'SwapCached: 0 kB' 'Active: 
14775920 kB' 'Inactive: 4646328 kB' 'Active(anon): 14161928 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565196 kB' 'Mapped: 186888 kB' 'Shmem: 13600076 kB' 'KReclaimable: 543332 kB' 'Slab: 936316 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392984 kB' 'KernelStack: 12816 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 15331724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 
21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.303 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.303 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 
00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.304 21:17:00 -- setup/common.sh@33 -- # echo 1536 00:02:35.304 21:17:00 -- setup/common.sh@33 -- # return 0 00:02:35.304 21:17:00 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:35.304 21:17:00 -- setup/hugepages.sh@112 -- # get_nodes 00:02:35.304 21:17:00 -- setup/hugepages.sh@27 -- # local node 00:02:35.304 21:17:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.304 21:17:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:35.304 21:17:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.304 21:17:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:35.304 21:17:00 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:35.304 21:17:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:35.304 21:17:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:35.304 21:17:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:35.304 21:17:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:35.304 21:17:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.304 21:17:00 -- setup/common.sh@18 -- # local node=0 00:02:35.304 21:17:00 -- setup/common.sh@19 -- # local var val 00:02:35.304 21:17:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:35.304 21:17:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.304 21:17:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:35.304 21:17:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:35.304 21:17:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.304 21:17:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 23072264 kB' 'MemUsed: 9757620 kB' 'SwapCached: 0 kB' 'Active: 7231504 kB' 'Inactive: 267840 kB' 'Active(anon): 6831228 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 267840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7301584 kB' 'Mapped: 64964 kB' 'AnonPages: 201032 kB' 'Shmem: 6633468 kB' 'KernelStack: 7096 kB' 'PageTables: 4696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 282776 kB' 'Slab: 510988 kB' 'SReclaimable: 282776 kB' 'SUnreclaim: 228212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.304 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.304 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 
-- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # continue 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:35.305 21:17:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.305 21:17:00 -- setup/common.sh@33 -- # echo 0 00:02:35.305 21:17:00 -- setup/common.sh@33 -- # return 0 00:02:35.305 21:17:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:35.305 21:17:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:35.305 21:17:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:35.305 21:17:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:35.305 21:17:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.305 21:17:00 -- setup/common.sh@18 -- # local node=1 
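The two get_meminfo calls above each walk an entire meminfo dump one field at a time under xtrace, which is what produces the long `continue` runs. A condensed, standalone sketch of the helper's logic, reconstructed from the trace (names follow the setup/common.sh lines shown here, but this is an illustration, not the SPDK script itself):

```bash
#!/usr/bin/env bash
shopt -s extglob    # needed for +([0-9]) in the prefix-strip expansion

# Sketch of the get_meminfo helper traced above: print the value of one
# meminfo field, either system-wide or for a single NUMA node.
get_meminfo() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo mem
    # A node argument switches the source to that node's sysfs meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <N> "; strip it so both
    # sources parse identically (the mem=("${mem[@]#Node +([0-9]) }") step).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Same IFS=': ' / read -r var val _ split as common.sh@31.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # the long per-field scan
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total      # global count, 1536 in the run above
get_meminfo HugePages_Surp 0     # node-0 surplus, 0 in the run above
```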
00:02:35.305 21:17:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:35.305 21:17:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:35.305 21:17:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:35.305 21:17:00 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:35.305 21:17:00 -- setup/common.sh@18 -- # local node=1
00:02:35.305 21:17:00 -- setup/common.sh@19 -- # local var val
00:02:35.305 21:17:00 -- setup/common.sh@20 -- # local mem_f mem
00:02:35.305 21:17:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.305 21:17:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:35.305 21:17:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:35.305 21:17:00 -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.305 21:17:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.305 21:17:00 -- setup/common.sh@31 -- # IFS=': '
00:02:35.305 21:17:00 -- setup/common.sh@31 -- # read -r var val _
00:02:35.305 21:17:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 12647732 kB' 'MemUsed: 15064092 kB' 'SwapCached: 0 kB' 'Active: 7544188 kB' 'Inactive: 4378488 kB' 'Active(anon): 7330472 kB' 'Inactive(anon): 0 kB' 'Active(file): 213716 kB' 'Inactive(file): 4378488 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11558840 kB' 'Mapped: 121924 kB' 'AnonPages: 363916 kB' 'Shmem: 6966636 kB' 'KernelStack: 5704 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 260556 kB' 'Slab: 425328 kB' 'SReclaimable: 260556 kB' 'SUnreclaim: 164772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:35.305 [repetitive xtrace elided: common.sh@31/@32 step through every field of the node1 dump until HugePages_Surp matches]
00:02:35.306 21:17:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:35.306 21:17:00 -- setup/common.sh@33 -- # echo 0
00:02:35.306 21:17:00 -- setup/common.sh@33 -- # return 0
00:02:35.306 21:17:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:35.306 21:17:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:35.306 21:17:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:35.306 21:17:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:35.306 21:17:00 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:35.306 node0=512 expecting 512
00:02:35.306 21:17:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:35.306 21:17:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:35.306 21:17:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:35.306 21:17:00 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:02:35.306 node1=1024 expecting 1024
00:02:35.306 21:17:00 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:02:35.306
00:02:35.306 real 0m1.424s
00:02:35.306 user 0m0.585s
00:02:35.306 sys 0m0.794s
00:02:35.307 21:17:00 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:35.307 21:17:00 -- common/autotest_common.sh@10 -- # set +x
00:02:35.307 ************************************
00:02:35.307 END TEST custom_alloc
00:02:35.307 ************************************
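custom_alloc's closing check above compares the observed per-node totals against the expected 512/1024 split as one joined string, via the [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] test. A minimal way to reproduce that verification outside the harness (the sysfs paths are the standard kernel locations; the check_split helper name is ours, not SPDK's):

```bash
#!/usr/bin/env bash
# Join each node's HugePages_Total into "n0,n1,..." and compare with the
# expected split, mirroring custom_alloc's final string comparison.
check_split() {
    local expected=$1 node got=()
    for node in /sys/devices/system/node/node[0-9]*; do
        # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
        got+=("$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")")
    done
    local joined
    joined=$(IFS=,; printf '%s' "${got[*]}")
    [[ $joined == "$expected" ]] && echo "per-node split OK: $joined" \
                                 || echo "mismatch: got $joined, want $expected"
}

check_split 512,1024    # the expectation printed in the trace above
```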
00:02:35.307 21:17:00 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:02:35.307 21:17:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:35.307 21:17:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:35.307 21:17:00 -- common/autotest_common.sh@10 -- # set +x
00:02:35.307 ************************************
00:02:35.307 START TEST no_shrink_alloc
00:02:35.307 ************************************
00:02:35.307 21:17:00 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:02:35.307 21:17:00 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:35.307 21:17:00 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:35.307 21:17:00 -- setup/hugepages.sh@51 -- # shift
00:02:35.307 21:17:00 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:35.307 21:17:00 -- setup/hugepages.sh@52 -- # local node_ids
00:02:35.307 21:17:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:35.307 21:17:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:35.307 21:17:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:35.307 21:17:00 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:35.307 21:17:00 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:35.307 21:17:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:35.307 21:17:00 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:35.307 21:17:00 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:35.307 21:17:00 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:35.307 21:17:00 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:35.307 21:17:00 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:35.307 21:17:00 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:35.307 21:17:00 -- setup/hugepages.sh@73 -- # return 0
00:02:35.307 21:17:00 -- setup/hugepages.sh@198 -- # setup output
00:02:35.307 21:17:00 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:35.307 21:17:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:36.686 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:36.686 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:36.686 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:36.686 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:36.686 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:36.686 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:36.686 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:36.686 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:36.686 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:36.686 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:36.686 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:36.686 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:36.686 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:36.686 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:36.686 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:36.686 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:36.686 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:36.686 21:17:02 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:02:36.686 21:17:02 -- setup/hugepages.sh@89 -- # local node
00:02:36.686 21:17:02 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:36.686 21:17:02 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:36.686 21:17:02 -- setup/hugepages.sh@92 -- # local surp
00:02:36.686 21:17:02 -- setup/hugepages.sh@93 -- # local resv
00:02:36.686 21:17:02 -- setup/hugepages.sh@94 -- # local anon
00:02:36.686 21:17:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
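The hugepages.sh@96 line above gates the anonymous-hugepage accounting: AnonHugePages is only queried because the selected transparent-hugepage mode is not [never]. The same gate in isolation (the sysfs path is the standard kernel location for the THP setting):

```bash
#!/usr/bin/env bash
# Re-creation of the hugepages.sh@96 test: the bracketed entry in the THP
# sysfs file marks the active mode, e.g. "always [madvise] never".
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    # THP enabled in some mode, so anonymous huge pages may exist; query them.
    awk '$1 == "AnonHugePages:" {print $2, $3}' /proc/meminfo
else
    echo 0    # THP off: anon hugepages stay at 0
fi
```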
00:02:36.686 21:17:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:36.686 21:17:02 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:36.686 21:17:02 -- setup/common.sh@18 -- # local node=
00:02:36.686 21:17:02 -- setup/common.sh@19 -- # local var val
00:02:36.686 21:17:02 -- setup/common.sh@20 -- # local mem_f mem
00:02:36.686 21:17:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.686 21:17:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.686 21:17:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.686 21:17:02 -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.686 21:17:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.686 21:17:02 -- setup/common.sh@31 -- # IFS=': '
00:02:36.686 21:17:02 -- setup/common.sh@31 -- # read -r var val _
00:02:36.686 21:17:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36713824 kB' 'MemAvailable: 41873040 kB' 'Buffers: 2696 kB' 'Cached: 18857768 kB' 'SwapCached: 0 kB' 'Active: 14776216 kB' 'Inactive: 4646328 kB' 'Active(anon): 14162224 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565276 kB' 'Mapped: 186920 kB' 'Shmem: 13600144 kB' 'KReclaimable: 543332 kB' 'Slab: 935844 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392512 kB' 'KernelStack: 12784 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15331848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
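The mem=("${mem[@]#Node +([0-9]) }") expansion a few lines up runs before every dump in this trace; it strips the per-node prefix so one parser serves both /proc/meminfo and the per-node files. A short demonstration of that expansion, using sample lines taken from the node0 dump earlier in the trace:

```bash
#!/usr/bin/env bash
shopt -s extglob    # +([0-9]) inside a parameter expansion needs extglob
mem=("Node 0 MemTotal: 32829884 kB" "Node 0 HugePages_Total: 512")
# Strip the "Node <N> " prefix from every element, as common.sh@29 does.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# -> MemTotal: 32829884 kB
# -> HugePages_Total: 512
```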
00:02:36.686 [repetitive xtrace elided: common.sh@31/@32 step through every field of the dump until AnonHugePages matches]
00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:36.687 21:17:02 -- setup/common.sh@33 -- # echo 0
00:02:36.687 21:17:02 -- setup/common.sh@33 -- # return 0
00:02:36.687 21:17:02 -- setup/hugepages.sh@97 -- # anon=0
00:02:36.687 21:17:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:36.687 21:17:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:36.687 21:17:02 -- setup/common.sh@18 -- # local node=
00:02:36.687 21:17:02 -- setup/common.sh@19 -- # local var val
00:02:36.687 21:17:02 -- setup/common.sh@20 -- # local mem_f mem
00:02:36.687 21:17:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.687 21:17:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.687 21:17:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.687 21:17:02 -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.687 21:17:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': '
00:02:36.687 21:17:02 -- setup/common.sh@31 -- # read -r var val _
00:02:36.687 21:17:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36714260 kB' 'MemAvailable: 41873476 kB' 'Buffers: 2696 kB' 'Cached: 18857768 kB' 'SwapCached: 0 kB' 'Active: 14777260 kB' 'Inactive: 4646328 kB' 'Active(anon): 14163268 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB'
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566416 kB' 'Mapped: 187032 kB' 'Shmem: 13600144 kB' 'KReclaimable: 543332 kB' 'Slab: 935844 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392512 kB' 'KernelStack: 12832 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15331860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196632 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.687 21:17:02 -- 
setup/common.sh@32 -- # continue 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.687 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.687 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.688 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.688 21:17:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.688 21:17:02 -- setup/common.sh@33 -- # echo 0 00:02:36.688 21:17:02 -- setup/common.sh@33 -- # return 0 00:02:36.688 21:17:02 -- setup/hugepages.sh@99 -- # surp=0 00:02:36.689 21:17:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:36.689 21:17:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:36.689 21:17:02 -- setup/common.sh@18 -- # local node= 00:02:36.689 21:17:02 -- setup/common.sh@19 -- # local var val 00:02:36.689 21:17:02 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.689 21:17:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.689 21:17:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.689 21:17:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.689 21:17:02 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.689 21:17:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36714424 kB' 'MemAvailable: 41873640 kB' 'Buffers: 2696 kB' 'Cached: 18857772 kB' 'SwapCached: 0 kB' 'Active: 14776536 kB' 'Inactive: 4646328 kB' 'Active(anon): 14162544 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565620 kB' 'Mapped: 186912 kB' 'Shmem: 13600148 kB' 'KReclaimable: 543332 kB' 'Slab: 935844 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392512 kB' 'KernelStack: 12800 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15333268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196632 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:36.689 21:17:02 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- 
setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.689 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.689 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 
21:17:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.690 21:17:02 -- setup/common.sh@33 -- # echo 0 00:02:36.690 
21:17:02 -- setup/common.sh@33 -- # return 0 00:02:36.690 21:17:02 -- setup/hugepages.sh@100 -- # resv=0 00:02:36.690 21:17:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:36.690 nr_hugepages=1024 00:02:36.690 21:17:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:36.690 resv_hugepages=0 00:02:36.690 21:17:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:36.690 surplus_hugepages=0 00:02:36.690 21:17:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:36.690 anon_hugepages=0 00:02:36.690 21:17:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:36.690 21:17:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:36.690 21:17:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:36.690 21:17:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:36.690 21:17:02 -- setup/common.sh@18 -- # local node= 00:02:36.690 21:17:02 -- setup/common.sh@19 -- # local var val 00:02:36.690 21:17:02 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.690 21:17:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.690 21:17:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.690 21:17:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.690 21:17:02 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.690 21:17:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36715308 kB' 'MemAvailable: 41874524 kB' 'Buffers: 2696 kB' 'Cached: 18857788 kB' 'SwapCached: 0 kB' 'Active: 14776264 kB' 'Inactive: 4646328 kB' 'Active(anon): 14162272 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565328 kB' 'Mapped: 186912 kB' 'Shmem: 13600164 kB' 'KReclaimable: 543332 kB' 'Slab: 935844 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392512 kB' 'KernelStack: 12832 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15334296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196792 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
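The values just traced (anon=0, surp=0, resv=0, nr_hugepages=1024) feed the hugepages.sh@107 consistency check that the configured count equals allocated plus surplus plus reserved pages. A hedged sketch of that identity, pulled straight from /proc/meminfo with the illustrative helper above (it mirrors the script's check, not a general kernel invariant):

expected=1024                                  # page count the test configured
total=$(get_meminfo_sketch HugePages_Total)    # 1024 in this run
surp=$(get_meminfo_sketch HugePages_Surp)      # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0
(( expected == total + surp + resv )) && echo "hugepage accounting consistent"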
00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.690 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.690 21:17:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.691 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.951 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.951 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.951 21:17:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.951 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.951 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.951 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.951 21:17:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.951 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 
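For a quick spot-check of the value this scan is heading for, the same HugePages_Total lookup can be approximated outside the harness with a one-liner (an equivalent read, not the script's method):

awk '/^HugePages_Total:/ {print $2}' /proc/meminfo   # -> 1024 in this run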
00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 
21:17:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.952 21:17:02 -- setup/common.sh@32 -- # continue 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.952 21:17:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.953 21:17:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.953 21:17:02 -- setup/common.sh@33 -- # echo 1024 00:02:36.953 21:17:02 -- setup/common.sh@33 -- # return 0 00:02:36.953 21:17:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:36.953 21:17:02 -- setup/hugepages.sh@112 -- # get_nodes 00:02:36.953 21:17:02 -- setup/hugepages.sh@27 -- # local node 00:02:36.953 21:17:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:36.953 21:17:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:36.953 21:17:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:36.953 21:17:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:36.953 21:17:02 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:36.953 21:17:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:36.953 21:17:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:36.953 21:17:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:36.953 21:17:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:36.953 21:17:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.953 21:17:02 
00:02:36.953 21:17:02 -- setup/common.sh@18 -- # local node=0
00:02:36.953 21:17:02 -- setup/common.sh@19 -- # local var val
00:02:36.953 21:17:02 -- setup/common.sh@20 -- # local mem_f mem
00:02:36.953 21:17:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.953 21:17:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:36.953 21:17:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:36.953 21:17:02 -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.953 21:17:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.953 21:17:02 -- setup/common.sh@31 -- # IFS=': '
00:02:36.953 21:17:02 -- setup/common.sh@31 -- # read -r var val _
00:02:36.953 21:17:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22003504 kB' 'MemUsed: 10826380 kB' 'SwapCached: 0 kB' 'Active: 7232260 kB' 'Inactive: 267840 kB' 'Active(anon): 6831984 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 267840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7301640 kB' 'Mapped: 64972 kB' 'AnonPages: 201596 kB' 'Shmem: 6633524 kB' 'KernelStack: 7000 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 282776 kB' 'Slab: 510820 kB' 'SReclaimable: 282776 kB' 'SUnreclaim: 228044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... per-key meminfo comparison trace elided; only HugePages_Surp matches ...]
00:02:36.954 21:17:02 -- setup/common.sh@33 -- # echo 0
00:02:36.954 21:17:02 -- setup/common.sh@33 -- # return 0
00:02:36.954 21:17:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:36.954 21:17:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:36.954 21:17:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:36.954 21:17:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:36.954 21:17:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:36.954 node0=1024 expecting 1024
00:02:36.954 21:17:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:36.954 21:17:02 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:02:36.954 21:17:02 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:02:36.954 21:17:02 -- setup/hugepages.sh@202 -- # setup output
00:02:36.954 21:17:02 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:36.954 21:17:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:37.889 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:37.889 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:37.889 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:37.889 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:37.889 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:37.889 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:37.889 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:37.889 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:37.889 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:37.889 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:37.889 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:37.889 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:37.889 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:37.889 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:37.889 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:37.889 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:37.889 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:38.151 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:02:38.151 21:17:03 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:38.151 21:17:03 -- setup/hugepages.sh@89 -- # local node
00:02:38.151 21:17:03 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:38.151 21:17:03 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:38.151 21:17:03 -- setup/hugepages.sh@92 -- # local surp
00:02:38.151 21:17:03 -- setup/hugepages.sh@93 -- # local resv
00:02:38.151 21:17:03 -- setup/hugepages.sh@94 -- # local anon
00:02:38.151 21:17:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:38.151 21:17:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:38.151 21:17:03 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:38.151 21:17:03 -- setup/common.sh@18 -- # local node=
00:02:38.151 21:17:03 -- setup/common.sh@19 -- # local var val
00:02:38.151 21:17:03 -- setup/common.sh@20 -- # local mem_f mem
00:02:38.151 21:17:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:38.151 21:17:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:38.151 21:17:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:38.151 21:17:03 -- setup/common.sh@28 -- # mapfile -t mem
00:02:38.151 21:17:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:38.151 21:17:03 -- setup/common.sh@31 -- # IFS=': '
00:02:38.151 21:17:03 -- setup/common.sh@31 -- # read -r var val _
00:02:38.151 21:17:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36719624 kB' 'MemAvailable: 41878840 kB' 'Buffers: 2696 kB' 'Cached: 18857848 kB' 'SwapCached: 0 kB' 'Active: 14777956 kB' 'Inactive: 4646328 kB' 'Active(anon): 14163964 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566920 kB' 'Mapped: 186924 kB' 'Shmem: 13600224 kB' 'KReclaimable: 543332 kB' 'Slab: 935628 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392296 kB' 'KernelStack: 12832 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15333020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196744 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
[... per-key meminfo comparison trace elided; only AnonHugePages matches ...]
00:02:38.152 21:17:03 -- setup/common.sh@33 -- # echo 0
00:02:38.152 21:17:03 -- setup/common.sh@33 -- # return 0
00:02:38.152 21:17:03 -- setup/hugepages.sh@97 -- # anon=0
00:02:38.152 21:17:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:38.152 21:17:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:38.152 21:17:03 -- setup/common.sh@18 -- # local node=
00:02:38.152 21:17:03 -- setup/common.sh@19 -- # local var val
00:02:38.152 21:17:03 -- setup/common.sh@20 -- # local mem_f mem
00:02:38.152 21:17:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:38.152 21:17:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:38.152 21:17:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:38.152 21:17:03 -- setup/common.sh@28 -- # mapfile -t mem
00:02:38.152 21:17:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:38.152 21:17:03 -- setup/common.sh@31 -- # IFS=': '
00:02:38.152 21:17:03 -- setup/common.sh@31 -- # read -r var val _
00:02:38.152 21:17:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36719112 kB' 'MemAvailable: 41878328 kB' 'Buffers: 2696 kB' 'Cached: 18857848 kB' 'SwapCached: 0 kB' 'Active: 14780396 kB' 'Inactive: 4646328 kB' 'Active(anon): 14166404 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569332 kB' 'Mapped: 187360 kB' 'Shmem: 13600224 kB' 'KReclaimable: 543332 kB' 'Slab: 935604 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392272 kB' 'KernelStack: 12832 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15335540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
[... per-key meminfo comparison trace elided; only HugePages_Surp matches ...]
00:02:38.154 21:17:03 -- setup/common.sh@33 -- # echo 0
00:02:38.154 21:17:03 -- setup/common.sh@33 -- # return 0
00:02:38.154 21:17:03 -- setup/hugepages.sh@99 -- # surp=0
00:02:38.154 21:17:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:38.154 21:17:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:38.154 21:17:03 -- setup/common.sh@18 -- # local node=
00:02:38.154 21:17:03 -- setup/common.sh@19 -- # local var val
00:02:38.154 21:17:03 -- setup/common.sh@20 -- # local mem_f mem
00:02:38.154 21:17:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:38.154 21:17:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:38.154 21:17:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:38.154 21:17:03 -- setup/common.sh@28 -- # mapfile -t mem
00:02:38.154 21:17:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': '
00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _
00:02:38.154 21:17:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36717776 kB' 'MemAvailable: 41876992 kB' 'Buffers: 2696 kB' 'Cached: 18857864 kB' 'SwapCached: 0 kB' 'Active: 14782092 kB' 'Inactive: 4646328 kB' 'Active(anon): 14168100 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571004 kB' 'Mapped: 187356 kB' 'Shmem: 13600240 kB' 'KReclaimable: 543332 kB' 'Slab: 935644 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392312 kB' 'KernelStack: 12832 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15338076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196700 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB'
'Active: 14782092 kB' 'Inactive: 4646328 kB' 'Active(anon): 14168100 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571004 kB' 'Mapped: 187356 kB' 'Shmem: 13600240 kB' 'KReclaimable: 543332 kB' 'Slab: 935644 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392312 kB' 'KernelStack: 12832 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15338076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196700 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 21:17:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 
00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.155 21:17:03 -- setup/common.sh@33 -- # echo 0 00:02:38.155 21:17:03 -- setup/common.sh@33 -- # return 0 00:02:38.155 21:17:03 -- setup/hugepages.sh@100 -- # resv=0 00:02:38.155 21:17:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:38.155 nr_hugepages=1024 00:02:38.155 21:17:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:38.155 resv_hugepages=0 00:02:38.155 21:17:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:38.155 surplus_hugepages=0 00:02:38.155 21:17:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:38.155 anon_hugepages=0 00:02:38.155 21:17:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:38.155 21:17:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:38.155 21:17:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:38.155 21:17:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:38.155 21:17:03 -- setup/common.sh@18 -- # local node= 00:02:38.155 21:17:03 -- setup/common.sh@19 -- # local var val 00:02:38.155 21:17:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.155 21:17:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.155 21:17:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.155 21:17:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.155 21:17:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.155 21:17:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 21:17:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36717608 kB' 'MemAvailable: 41876824 kB' 'Buffers: 2696 kB' 'Cached: 18857864 kB' 'SwapCached: 0 kB' 'Active: 14778212 kB' 'Inactive: 4646328 kB' 'Active(anon): 14164220 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567136 kB' 'Mapped: 187788 kB' 'Shmem: 13600240 kB' 'KReclaimable: 543332 kB' 'Slab: 935644 kB' 'SReclaimable: 543332 kB' 'SUnreclaim: 392312 kB' 'KernelStack: 12816 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15333984 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 196712 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 26435584 kB' 'DirectMap1G: 40894464 kB' 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.155 21:17:03 -- setup/common.sh@32 -- # continue [... xtrace trimmed: every remaining field from MemFree through Unaccepted likewise compared against HugePages_Total and skipped with continue ...] 00:02:38.157 21:17:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 21:17:03 -- 
setup/common.sh@33 -- # echo 1024 00:02:38.157 21:17:03 -- setup/common.sh@33 -- # return 0 00:02:38.157 21:17:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:38.157 21:17:03 -- setup/hugepages.sh@112 -- # get_nodes 00:02:38.157 21:17:03 -- setup/hugepages.sh@27 -- # local node 00:02:38.157 21:17:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.157 21:17:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:38.157 21:17:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.157 21:17:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:38.157 21:17:03 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:38.157 21:17:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:38.157 21:17:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.157 21:17:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.157 21:17:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:38.157 21:17:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.157 21:17:03 -- setup/common.sh@18 -- # local node=0 00:02:38.157 21:17:03 -- setup/common.sh@19 -- # local var val 00:02:38.157 21:17:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.157 21:17:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.157 21:17:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:38.157 21:17:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:38.157 21:17:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.157 21:17:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.157 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 21:17:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21987240 kB' 'MemUsed: 10842644 kB' 'SwapCached: 0 kB' 'Active: 7236988 kB' 'Inactive: 267840 kB' 'Active(anon): 6836712 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 267840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7301704 kB' 'Mapped: 64980 kB' 'AnonPages: 206248 kB' 'Shmem: 6633588 kB' 'KernelStack: 7064 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 282776 kB' 'Slab: 510684 kB' 'SReclaimable: 282776 kB' 'SUnreclaim: 227908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:38.157 21:17:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.157 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.157 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 21:17:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.157 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.157 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 21:17:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.157 21:17:03 -- setup/common.sh@32 -- # continue 00:02:38.157 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 21:17:03 -- setup/common.sh@31 -- # read 
-r var val _ 00:02:38.157 21:17:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.157 21:17:03 -- setup/common.sh@32 -- # continue [... xtrace trimmed: the node0 meminfo fields Active through SUnreclaim likewise compared against HugePages_Surp and skipped with continue ...] 00:02:38.158 21:17:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 21:17:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 21:17:03 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 21:17:03 -- setup/common.sh@32 -- # continue [... xtrace trimmed: ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free likewise skipped with continue ...] 00:02:38.158 21:17:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 21:17:03 -- setup/common.sh@33 -- # echo 0 00:02:38.158 21:17:03 -- setup/common.sh@33 -- # return 0 00:02:38.158 21:17:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.158 21:17:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.158 21:17:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.158 21:17:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.158 21:17:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:38.158 node0=1024 expecting 1024 00:02:38.158 21:17:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:38.158 00:02:38.158 real 0m2.852s 00:02:38.158 user 0m1.187s 00:02:38.158 sys 0m1.588s 00:02:38.158 21:17:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:38.158 21:17:03 -- common/autotest_common.sh@10 -- # set +x 00:02:38.158 ************************************ 00:02:38.158 END TEST no_shrink_alloc 00:02:38.158 ************************************ 00:02:38.158 21:17:03 -- setup/hugepages.sh@217 -- # clear_hp 00:02:38.158 21:17:03 -- setup/hugepages.sh@37 -- # local node hp 00:02:38.158 21:17:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:38.158 
21:17:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:38.158 21:17:03 -- setup/hugepages.sh@41 -- # echo 0 00:02:38.158 21:17:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:38.158 21:17:03 -- setup/hugepages.sh@41 -- # echo 0 00:02:38.158 21:17:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:38.158 21:17:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:38.158 21:17:03 -- setup/hugepages.sh@41 -- # echo 0 00:02:38.158 21:17:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:38.158 21:17:03 -- setup/hugepages.sh@41 -- # echo 0 00:02:38.158 21:17:03 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:38.158 21:17:03 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:38.158 00:02:38.158 real 0m11.757s 00:02:38.158 user 0m4.501s 00:02:38.158 sys 0m6.135s 00:02:38.158 21:17:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:38.158 21:17:03 -- common/autotest_common.sh@10 -- # set +x 00:02:38.158 ************************************ 00:02:38.158 END TEST hugepages 00:02:38.158 ************************************ 00:02:38.158 21:17:03 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:38.158 21:17:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:38.158 21:17:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:38.158 21:17:03 -- common/autotest_common.sh@10 -- # set +x 00:02:38.416 ************************************ 00:02:38.416 START TEST driver 00:02:38.416 ************************************ 00:02:38.416 21:17:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:38.416 * Looking for test storage... 
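The hugepages suite that just finished leans entirely on setup/common.sh's get_meminfo, which is why its trace is dominated by field-by-field scans of /proc/meminfo and of the per-node copies under /sys/devices/system/node. A minimal standalone sketch of that reader follows; the function name and the example calls are illustrative, not the SPDK helper itself:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: scan /proc/meminfo (or a
# per-node meminfo file) with IFS=': ' and print the value of one field.
shopt -s extglob   # needed for the "Node N " prefix strip below
get_meminfo_sketch() {
    local get=$1 node=${2:-}        # field name, optional NUMA node number
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; stripping it is a
    # no-op for /proc/meminfo, mirroring common.sh@29 in the trace.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo_sketch HugePages_Total    # 1024 on the host traced above
get_meminfo_sketch HugePages_Surp 0   # surplus huge pages on node 0

Matching the field name verbatim against keys like HugePages_Rsvd, HugePages_Total and HugePages_Surp is what produces the long compare-and-continue runs trimmed from the trace above.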
00:02:38.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:38.416 21:17:03 -- setup/driver.sh@68 -- # setup reset 00:02:38.416 21:17:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.416 21:17:03 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.947 21:17:06 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:40.947 21:17:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:40.947 21:17:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:40.947 21:17:06 -- common/autotest_common.sh@10 -- # set +x 00:02:40.947 ************************************ 00:02:40.947 START TEST guess_driver 00:02:40.947 ************************************ 00:02:40.948 21:17:06 -- common/autotest_common.sh@1111 -- # guess_driver 00:02:40.948 21:17:06 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:40.948 21:17:06 -- setup/driver.sh@47 -- # local fail=0 00:02:40.948 21:17:06 -- setup/driver.sh@49 -- # pick_driver 00:02:40.948 21:17:06 -- setup/driver.sh@36 -- # vfio 00:02:40.948 21:17:06 -- setup/driver.sh@21 -- # local iommu_grups 00:02:40.948 21:17:06 -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:40.948 21:17:06 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:40.948 21:17:06 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:40.948 21:17:06 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:40.948 21:17:06 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:40.948 21:17:06 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:40.948 21:17:06 -- setup/driver.sh@14 -- # mod vfio_pci 00:02:40.948 21:17:06 -- setup/driver.sh@12 -- # dep vfio_pci 00:02:40.948 21:17:06 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:40.948 21:17:06 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:40.948 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:40.948 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:40.948 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:40.948 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:40.948 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:40.948 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:40.948 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:40.948 21:17:06 -- setup/driver.sh@30 -- # return 0 00:02:40.948 21:17:06 -- setup/driver.sh@37 -- # echo vfio-pci 00:02:40.948 21:17:06 -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:40.948 21:17:06 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:40.948 21:17:06 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:40.948 Looking for driver=vfio-pci 00:02:40.948 21:17:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:40.948 21:17:06 -- setup/driver.sh@45 -- # setup output config 00:02:40.948 21:17:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.948 21:17:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:42.324 21:17:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:42.324 21:17:07 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:02:42.324 21:17:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver [... xtrace trimmed: the remaining marker/driver pairs emitted by setup.sh config are read the same way; every marker matches -> and every driver matches vfio-pci ...] 00:02:43.260 21:17:08 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:02:43.260 21:17:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:43.260 21:17:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:43.260 21:17:08 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:43.260 21:17:08 -- setup/driver.sh@65 -- # setup reset 00:02:43.260 21:17:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:43.260 21:17:08 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.796 00:02:45.796 real 0m4.737s 00:02:45.796 user 0m1.064s 00:02:45.796 sys 0m1.807s 00:02:45.796 21:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:45.796 21:17:11 -- common/autotest_common.sh@10 -- # set +x 00:02:45.796 ************************************ 00:02:45.796 END TEST guess_driver 00:02:45.796 ************************************ 00:02:45.796 00:02:45.796 real 0m7.382s 00:02:45.796 user 0m1.664s 00:02:45.796 sys 0m2.867s 00:02:45.796 21:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:45.796 21:17:11 -- common/autotest_common.sh@10 -- # set +x 00:02:45.796 ************************************ 00:02:45.796 END TEST driver 00:02:45.796 ************************************ 00:02:45.796 21:17:11 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:45.796 21:17:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:45.796 21:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:45.796 21:17:11 -- common/autotest_common.sh@10 -- # set +x 00:02:45.796 ************************************ 00:02:45.796 START TEST devices 00:02:45.796 ************************************ 00:02:45.796 21:17:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:45.796 * Looking for test storage... 
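guess_driver settled on vfio-pci above because the host exposes populated IOMMU groups (141 of them) and modprobe --show-depends resolves vfio_pci to real .ko modules. A rough standalone equivalent of that decision; the function names are illustrative, and the uio_pci_generic fallback branch is an assumption about what a host without a usable IOMMU would need, not part of this trace:

#!/usr/bin/env bash
# Sketch of the guess_driver decision traced above.
shopt -s nullglob   # an empty iommu_groups directory must yield no matches
is_driver_sketch() {
    # --show-depends prints the insmod chain without loading anything; a
    # *.ko path in the output means the module and its dependencies exist.
    modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'
}
pick_driver_sketch() {
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && is_driver_sketch vfio_pci; then
        echo vfio-pci               # IOMMU populated and module resolvable
    elif is_driver_sketch uio_pci_generic; then
        echo uio_pci_generic        # assumed fallback for non-IOMMU hosts
    else
        echo 'No valid driver found'
        return 1
    fi
}
pick_driver_sketch   # prints vfio-pci on the build host traced above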
00:02:45.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:45.796 21:17:11 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:45.796 21:17:11 -- setup/devices.sh@192 -- # setup reset 00:02:45.796 21:17:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:45.796 21:17:11 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.696 21:17:12 -- setup/devices.sh@194 -- # get_zoned_devs 00:02:47.697 21:17:12 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:47.697 21:17:12 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:47.697 21:17:12 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:47.697 21:17:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:47.697 21:17:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:47.697 21:17:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:47.697 21:17:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:47.697 21:17:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:47.697 21:17:12 -- setup/devices.sh@196 -- # blocks=() 00:02:47.697 21:17:12 -- setup/devices.sh@196 -- # declare -a blocks 00:02:47.697 21:17:12 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:47.697 21:17:12 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:47.697 21:17:12 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:47.697 21:17:12 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:47.697 21:17:12 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:02:47.697 21:17:12 -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:47.697 21:17:12 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:02:47.697 21:17:12 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:47.697 21:17:12 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:47.697 21:17:12 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:02:47.697 21:17:12 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:47.697 No valid GPT data, bailing 00:02:47.697 21:17:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:47.697 21:17:12 -- scripts/common.sh@391 -- # pt= 00:02:47.697 21:17:12 -- scripts/common.sh@392 -- # return 1 00:02:47.697 21:17:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:47.697 21:17:12 -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:47.697 21:17:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:47.697 21:17:12 -- setup/common.sh@80 -- # echo 1000204886016 00:02:47.697 21:17:12 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:47.697 21:17:12 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:47.697 21:17:12 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:47.697 21:17:12 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:47.697 21:17:12 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:47.697 21:17:12 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:47.697 21:17:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:47.697 21:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:47.697 21:17:12 -- common/autotest_common.sh@10 -- # set +x 00:02:47.697 ************************************ 00:02:47.697 START TEST nvme_mount 00:02:47.697 ************************************ 00:02:47.697 21:17:13 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:02:47.697 21:17:13 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:47.697 21:17:13 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:47.697 21:17:13 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:47.697 21:17:13 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:47.697 21:17:13 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:47.697 21:17:13 -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:47.697 21:17:13 -- setup/common.sh@40 -- # local part_no=1 00:02:47.697 21:17:13 -- setup/common.sh@41 -- # local size=1073741824 00:02:47.697 21:17:13 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:47.697 21:17:13 -- setup/common.sh@44 -- # parts=() 00:02:47.697 21:17:13 -- setup/common.sh@44 -- # local parts 00:02:47.697 21:17:13 -- setup/common.sh@46 -- # (( part = 1 )) 00:02:47.697 21:17:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:47.697 21:17:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:47.697 21:17:13 -- setup/common.sh@46 -- # (( part++ )) 00:02:47.697 21:17:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:47.697 21:17:13 -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:47.697 21:17:13 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:47.697 21:17:13 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:48.634 Creating new GPT entries in memory. 00:02:48.634 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:48.634 other utilities. 00:02:48.634 21:17:14 -- setup/common.sh@57 -- # (( part = 1 )) 00:02:48.634 21:17:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:48.634 21:17:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:48.634 21:17:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:48.634 21:17:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:49.571 Creating new GPT entries in memory. 00:02:49.571 The operation has completed successfully. 
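The partition step just traced is plain arithmetic: partition_drive divides the 1 GiB request by 512 to get 2097152 sectors, starts at the conventional sector 2048, and that is exactly where the end sector 2099199 in the sgdisk call comes from (2048 + 2097152 - 1). A condensed sketch of the same sequence, assuming a scratch disk; the polling loop stands in for scripts/sync_dev_uevents.sh, which waits for the kernel's partition uevent instead:

#!/usr/bin/env bash
# Sketch of the partition_drive step traced above. Destructive: only run
# against a disposable disk.
set -e
disk=/dev/nvme0n1
size=$((1073741824 / 512))            # 1 GiB in 512-byte sectors = 2097152
part_start=2048                       # conventional first usable sector
part_end=$((part_start + size - 1))   # 2099199, as in the sgdisk call above

sgdisk "$disk" --zap-all              # destroy existing GPT/MBR structures
# Serialize partitioners on this disk, as the trace does with flock(1),
# then create partition 1 over the computed range.
flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"

# Wait for the kernel/udev to publish the new partition node before use.
until [[ -b ${disk}p1 ]]; do sleep 0.1; done
echo "created ${disk}p1"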
00:02:49.571 21:17:15 -- setup/common.sh@57 -- # (( part++ )) 00:02:49.571 21:17:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:49.571 21:17:15 -- setup/common.sh@62 -- # wait 2470489 00:02:49.571 21:17:15 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.571 21:17:15 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:49.571 21:17:15 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.571 21:17:15 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:49.571 21:17:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:49.571 21:17:15 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.571 21:17:15 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:49.571 21:17:15 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:49.571 21:17:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:49.571 21:17:15 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.571 21:17:15 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:49.571 21:17:15 -- setup/devices.sh@53 -- # local found=0 00:02:49.571 21:17:15 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:49.571 21:17:15 -- setup/devices.sh@56 -- # : 00:02:49.571 21:17:15 -- setup/devices.sh@59 -- # local pci status 00:02:49.571 21:17:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.571 21:17:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:49.571 21:17:15 -- setup/devices.sh@47 -- # setup output config 00:02:49.571 21:17:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.571 21:17:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:50.951 21:17:16 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.951 21:17:16 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:50.951 21:17:16 -- setup/devices.sh@63 -- # found=1 00:02:50.951 21:17:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.951 21:17:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.951 21:17:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.951 21:17:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.951 21:17:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.951 21:17:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.951 21:17:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.951 21:17:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.951 21:17:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.951 21:17:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.951 
21:17:16 -- setup/devices.sh@60 -- # read -r pci _ _ status [... xtrace trimmed: 0000:00:04.2 through 0000:80:04.0 each compared against 0000:88:00.0; none match, so no other PCI device is bound ...] 00:02:50.951 21:17:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:50.951 21:17:16 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:50.951 21:17:16 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:50.951 21:17:16 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:50.951 21:17:16 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:50.951 21:17:16 -- setup/devices.sh@110 -- # cleanup_nvme 00:02:50.951 21:17:16 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:50.951 21:17:16 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:50.951 21:17:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:50.951 21:17:16 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:50.951 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:50.951 21:17:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:50.951 21:17:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:51.210 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:51.210 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:51.210 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:51.210 
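The cleanup_nvme pass shows exactly what "wiping" erases: two bytes of ext4 superblock magic (53 ef at offset 0x438) on the partition, then the primary GPT header, the backup GPT header at the far end of the device, and the protective MBR on the whole disk. The same teardown as a minimal sketch, using the paths from this run:

#!/usr/bin/env bash
# Sketch of the cleanup_nvme teardown traced above: unmount, then clear
# filesystem and partition-table signatures so the next sub-test starts
# from a blank disk.
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
disk=/dev/nvme0n1

mountpoint -q "$mnt" && umount "$mnt"
# Partition first (the ext4 magic), then the whole disk (both GPT headers
# plus the protective MBR); the kernel re-reads the table afterwards.
[[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"
[[ -b $disk ]] && wipefs --all "$disk"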
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:02:51.210 21:17:16 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:02:51.210 21:17:16 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:02:51.210 21:17:16 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:51.210 21:17:16 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:02:51.210 21:17:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:02:51.210 21:17:16 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:51.210 21:17:16 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:51.210 21:17:16 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:51.210 21:17:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:02:51.210 21:17:16 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:51.210 21:17:16 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:51.210 21:17:16 -- setup/devices.sh@53 -- # local found=0 00:02:51.210 21:17:16 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:51.210 21:17:16 -- setup/devices.sh@56 -- # : 00:02:51.210 21:17:16 -- setup/devices.sh@59 -- # local pci status 00:02:51.210 21:17:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:51.210 21:17:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:51.210 21:17:16 -- setup/devices.sh@47 -- # setup output config 00:02:51.210 21:17:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.210 21:17:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:52.145 21:17:17 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.145 21:17:17 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:02:52.145 21:17:17 -- setup/devices.sh@63 -- # found=1 00:02:52.145 21:17:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.145 21:17:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.145 21:17:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.145 21:17:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.145 21:17:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.145 21:17:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.145 21:17:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.145 21:17:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.145 21:17:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.145 21:17:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.145 21:17:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.145 21:17:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.145 21:17:17 -- setup/devices.sh@60 -- # read -r pci _ _ status [... xtrace trimmed: 0000:00:04.1 through 0000:80:04.0 each compared against 0000:88:00.0; none match ...] 00:02:52.403 21:17:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:52.403 21:17:18 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:52.403 21:17:18 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.403 21:17:18 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:52.403 21:17:18 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:52.403 21:17:18 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.403 21:17:18 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:02:52.403 21:17:18 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:52.403 21:17:18 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:02:52.403 21:17:18 -- setup/devices.sh@50 -- # local mount_point= 00:02:52.403 21:17:18 -- setup/devices.sh@51 -- # local test_file= 00:02:52.403 21:17:18 -- setup/devices.sh@53 -- # local found=0 00:02:52.403 21:17:18 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:52.403 21:17:18 -- setup/devices.sh@59 -- # local pci status 00:02:52.403 21:17:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.403 21:17:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:52.403 21:17:18 -- setup/devices.sh@47 -- # setup output config 00:02:52.404 21:17:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.404 21:17:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:53.778 21:17:19 -- 
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.778 21:17:19 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:02:53.778 21:17:19 -- setup/devices.sh@63 -- # found=1 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.779 21:17:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:53.779 21:17:19 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:02:53.779 21:17:19 -- setup/devices.sh@68 -- # return 0 00:02:53.779 21:17:19 -- setup/devices.sh@128 -- # cleanup_nvme 00:02:53.779 21:17:19 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.779 21:17:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:02:53.779 21:17:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:53.779 21:17:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:53.779 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:53.779 00:02:53.779 real 0m6.324s 00:02:53.779 user 0m1.455s 00:02:53.779 sys 0m2.455s 00:02:53.779 21:17:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:53.779 21:17:19 -- common/autotest_common.sh@10 -- # set +x 00:02:53.779 ************************************ 00:02:53.779 END TEST nvme_mount 00:02:53.779 ************************************ 00:02:53.779 21:17:19 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:02:53.779 21:17:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:53.779 21:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:53.779 21:17:19 -- common/autotest_common.sh@10 -- # set +x 00:02:54.037 ************************************ 00:02:54.037 START TEST dm_mount 00:02:54.037 ************************************ 00:02:54.037 21:17:19 -- common/autotest_common.sh@1111 -- # dm_mount 00:02:54.037 21:17:19 -- setup/devices.sh@144 -- # pv=nvme0n1 00:02:54.037 21:17:19 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:02:54.037 21:17:19 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:02:54.037 21:17:19 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:02:54.037 21:17:19 -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:54.038 21:17:19 -- setup/common.sh@40 -- # local part_no=2 00:02:54.038 21:17:19 -- setup/common.sh@41 -- # local size=1073741824 00:02:54.038 21:17:19 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:54.038 21:17:19 -- setup/common.sh@44 -- # parts=() 00:02:54.038 21:17:19 -- setup/common.sh@44 -- # local parts 00:02:54.038 21:17:19 -- setup/common.sh@46 -- # (( part = 1 )) 00:02:54.038 21:17:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:54.038 21:17:19 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:54.038 21:17:19 -- setup/common.sh@46 -- # (( part++ )) 00:02:54.038 21:17:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:54.038 21:17:19 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:54.038 21:17:19 -- setup/common.sh@46 -- # (( part++ )) 00:02:54.038 21:17:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:54.038 21:17:19 -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:54.038 21:17:19 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:54.038 21:17:19 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:02:54.973 Creating new GPT entries in memory. 00:02:54.973 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:54.973 other utilities. 00:02:54.973 21:17:20 -- setup/common.sh@57 -- # (( part = 1 )) 00:02:54.973 21:17:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:54.973 21:17:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:54.973 21:17:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:54.973 21:17:20 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:55.906 Creating new GPT entries in memory. 00:02:55.906 The operation has completed successfully. 
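The sgdisk sequence above is the harness's partition_drive helper: wipe the GPT label, then carve fixed 1 GiB partitions under an flock on the whole disk (the second --new call follows just below). A minimal standalone sketch of the same pattern; DISK is an assumption here, and --zap-all destroys the partition table, so point it only at a scratch device:

# Hedged sketch of the partitioning pattern logged above.
DISK=/dev/nvme0n1                                     # assumption: a disposable test disk
sgdisk "$DISK" --zap-all                              # destroy any existing GPT/MBR metadata
flock "$DISK" sgdisk "$DISK" --new=1:2048:2099199     # partition 1: sectors 2048..2099199 (1 GiB)
flock "$DISK" sgdisk "$DISK" --new=2:2099200:4196351  # partition 2: the next 1 GiB
partprobe "$DISK"                                     # ask the kernel to re-read the table

The flock matters because udev may be probing the disk concurrently; serializing the sgdisk calls avoids the partition table being rewritten mid-scan.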
00:02:55.906 21:17:21 -- setup/common.sh@57 -- # (( part++ ))
00:02:55.906 21:17:21 -- setup/common.sh@57 -- # (( part <= part_no ))
00:02:55.906 21:17:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:02:55.906 21:17:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:02:55.906 21:17:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:02:56.842 The operation has completed successfully.
00:02:56.842 21:17:22 -- setup/common.sh@57 -- # (( part++ ))
00:02:56.842 21:17:22 -- setup/common.sh@57 -- # (( part <= part_no ))
00:02:56.842 21:17:22 -- setup/common.sh@62 -- # wait 2472882
00:02:57.102 21:17:22 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:02:57.102 21:17:22 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:57.102 21:17:22 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:02:57.102 21:17:22 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:02:57.102 21:17:22 -- setup/devices.sh@160 -- # for t in {1..5}
00:02:57.102 21:17:22 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:02:57.102 21:17:22 -- setup/devices.sh@161 -- # break
00:02:57.102 21:17:22 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:02:57.102 21:17:22 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:02:57.102 21:17:22 -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:02:57.102 21:17:22 -- setup/devices.sh@166 -- # dm=dm-0
00:02:57.102 21:17:22 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:02:57.102 21:17:22 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:02:57.102 21:17:22 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:57.102 21:17:22 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:02:57.102 21:17:22 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:57.102 21:17:22 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:02:57.102 21:17:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:02:57.102 21:17:22 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:57.102 21:17:22 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:02:57.102 21:17:22 -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:02:57.102 21:17:22 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:02:57.102 21:17:22 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:57.102 21:17:22 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:02:57.102 21:17:22 -- setup/devices.sh@53 -- # local found=0
00:02:57.102 21:17:22 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:02:57.102 21:17:22 -- setup/devices.sh@56 -- # :
00:02:57.102 21:17:22 -- setup/devices.sh@59 -- # local pci status
00:02:57.102 21:17:22 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:57.102 21:17:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:02:57.102 21:17:22 -- setup/devices.sh@47 -- # setup output config
00:02:57.102 21:17:22 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:57.102 21:17:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:02:58.039 21:17:23 -- setup/devices.sh@63 -- # found=1
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.039 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:58.039 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.298 21:17:23 -- setup/devices.sh@66 -- # (( found == 1 ))
00:02:58.298 21:17:23 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:02:58.298 21:17:23 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:58.298 21:17:23 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:02:58.298 21:17:23 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:02:58.298 21:17:23 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:58.298 21:17:23 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:02:58.298 21:17:23 -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:02:58.298 21:17:23 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:02:58.298 21:17:23 -- setup/devices.sh@50 -- # local mount_point=
00:02:58.298 21:17:23 -- setup/devices.sh@51 -- # local test_file=
00:02:58.298 21:17:23 -- setup/devices.sh@53 -- # local found=0
00:02:58.298 21:17:23 -- setup/devices.sh@55 -- # [[ -n '' ]]
00:02:58.298 21:17:23 -- setup/devices.sh@59 -- # local pci status
00:02:58.298 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:58.298 21:17:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:02:58.298 21:17:23 -- setup/devices.sh@47 -- # setup output config
00:02:58.298 21:17:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:58.298 21:17:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:59.293 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.293 21:17:24 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:02:59.293 21:17:24 -- setup/devices.sh@63 -- # found=1
00:02:59.293 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.293 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.293 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.293 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.293 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.293 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.293 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.293 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.293 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.293 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.293 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.293 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.293 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.294 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.294 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
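The holder checks above (/sys/class/block/<part>/holders/dm-0) are how the harness confirms device-mapper really sits on top of both partitions. For reference, a hedged sketch of the same construction outside the harness; the linear table is an assumption, since the log does not show the table dmsetup was fed:

# Hedged sketch: join the two 1 GiB partitions into one linear dm device,
# then confirm each partition lists the dm node among its holders.
P1=/dev/nvme0n1p1; P2=/dev/nvme0n1p2
S1=$(blockdev --getsz "$P1"); S2=$(blockdev --getsz "$P2")   # sizes in 512-byte sectors
dmsetup create nvme_dm_test <<EOF
0 $S1 linear $P1 0
$S1 $S2 linear $P2 0
EOF
dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")     # resolves to e.g. dm-0
[[ -e /sys/class/block/${P1##*/}/holders/$dm ]] && echo "p1 held by $dm"
[[ -e /sys/class/block/${P2##*/}/holders/$dm ]] && echo "p2 held by $dm"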
00:02:59.294 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.294 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.294 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.294 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.294 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.294 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.294 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.294 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.294 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.294 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.294 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.294 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.294 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.294 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.294 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.294 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.294 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:02:59.294 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.552 21:17:25 -- setup/devices.sh@66 -- # (( found == 1 ))
00:02:59.552 21:17:25 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:02:59.552 21:17:25 -- setup/devices.sh@68 -- # return 0
00:02:59.552 21:17:25 -- setup/devices.sh@187 -- # cleanup_dm
00:02:59.552 21:17:25 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:59.552 21:17:25 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:02:59.552 21:17:25 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:02:59.553 21:17:25 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:02:59.553 21:17:25 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:02:59.553 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:02:59.553 21:17:25 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:02:59.553 21:17:25 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:02:59.553
00:02:59.553 real 0m5.694s
00:02:59.553 user 0m0.933s
00:02:59.553 sys 0m1.605s
00:02:59.553 21:17:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:59.553 21:17:25 -- common/autotest_common.sh@10 -- # set +x
00:02:59.553 ************************************
00:02:59.553 END TEST dm_mount
00:02:59.553 ************************************
00:02:59.553 21:17:25 -- setup/devices.sh@1 -- # cleanup
00:02:59.553 21:17:25 -- setup/devices.sh@11 -- # cleanup_nvme
00:02:59.553 21:17:25 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:59.553 21:17:25 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:02:59.553 21:17:25 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:02:59.553 21:17:25 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:02:59.553 21:17:25 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:02:59.811 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:02:59.811 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:02:59.811 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:02:59.811 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:02:59.811 21:17:25 -- setup/devices.sh@12 -- # cleanup_dm
00:02:59.811 21:17:25 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:02:59.811 21:17:25 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:02:59.811 21:17:25 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:02:59.811 21:17:25 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:02:59.811 21:17:25 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:02:59.811 21:17:25 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:02:59.811
00:02:59.811 real 0m14.048s
00:02:59.811 user 0m3.081s
00:02:59.811 sys 0m5.142s
00:02:59.811 21:17:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:59.811 21:17:25 -- common/autotest_common.sh@10 -- # set +x
00:02:59.811 ************************************
00:02:59.811 END TEST devices
00:02:59.811 ************************************
00:02:59.811
00:02:59.811 real 0m44.426s
00:02:59.811 user 0m12.696s
00:02:59.811 sys 0m19.879s
00:02:59.811 21:17:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:59.811 21:17:25 -- common/autotest_common.sh@10 -- # set +x
00:02:59.811 ************************************
00:02:59.811 END TEST setup.sh
00:02:59.811 ************************************
00:03:00.070 21:17:25 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:01.007 Hugepages
00:03:01.007 node hugesize free / total
00:03:01.007 node0 1048576kB 0 / 0
00:03:01.007 node0 2048kB 2048 / 2048
00:03:01.007 node1 1048576kB 0 / 0
00:03:01.007 node1 2048kB 0 / 0
00:03:01.007
00:03:01.007 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:01.007 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:01.007 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:01.007 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:01.007 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:01.007 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:01.007 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:01.007 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:01.007 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:01.007 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:01.007 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:01.007 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:01.007 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:01.007 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:01.007 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:01.007 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:01.007 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:01.265 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:01.265 21:17:26 -- spdk/autotest.sh@130 -- # uname -s
00:03:01.265 21:17:26 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:03:01.265 21:17:26 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:03:01.265 21:17:26 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:02.640 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:02.640 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:02.640 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:02.640 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:02.640 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:02.640 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:02.640 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:02.640 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:02.640 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:02.640 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:02.640 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:02.640 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:02.640 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:02.640 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:02.640 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:02.640 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:03.574 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:03.574 21:17:29 -- common/autotest_common.sh@1518 -- # sleep 1
00:03:04.508 21:17:30 -- common/autotest_common.sh@1519 -- # bdfs=()
00:03:04.508 21:17:30 -- common/autotest_common.sh@1519 -- # local bdfs
00:03:04.508 21:17:30 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:04.508 21:17:30 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:04.508 21:17:30 -- common/autotest_common.sh@1499 -- # bdfs=()
00:03:04.508 21:17:30 -- common/autotest_common.sh@1499 -- # local bdfs
00:03:04.508 21:17:30 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:04.508 21:17:30 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:04.508 21:17:30 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr'
00:03:04.767 21:17:30 -- common/autotest_common.sh@1501 -- # (( 1 == 0 ))
00:03:04.767 21:17:30 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0
00:03:04.767 21:17:30 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:05.705 Waiting for block devices as requested
00:03:05.964 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:03:05.964 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:05.964 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:05.964 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:06.222 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:06.222 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:06.222 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:06.222 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:06.480 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:06.480 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:06.480 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:06.480 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:06.739 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:06.739 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:06.739 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:06.997 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:06.997 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:06.997 21:17:32 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:06.997 21:17:32 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:03:06.997 21:17:32 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0
00:03:06.997 21:17:32 -- common/autotest_common.sh@1488 -- # grep 0000:88:00.0/nvme/nvme
00:03:06.997 21:17:32 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:06.997 21:17:32 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:03:06.998 21:17:32 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:06.998 21:17:32 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0
00:03:06.998 21:17:32 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:03:06.998 21:17:32 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:03:06.998 21:17:32 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:03:06.998 21:17:32 -- common/autotest_common.sh@1531 -- # grep oacs
00:03:06.998 21:17:32 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:06.998 21:17:32 -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:03:06.998 21:17:32 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:06.998 21:17:32 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:06.998 21:17:32 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:03:06.998 21:17:32 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:06.998 21:17:32 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:06.998 21:17:32 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:06.998 21:17:32 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:03:06.998 21:17:32 -- common/autotest_common.sh@1543 -- # continue
00:03:06.998 21:17:32 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:03:06.998 21:17:32 -- common/autotest_common.sh@716 -- # xtrace_disable
00:03:06.998 21:17:32 -- common/autotest_common.sh@10 -- # set +x
00:03:06.998 21:17:32 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:03:06.998 21:17:32 -- common/autotest_common.sh@710 -- # xtrace_disable
00:03:06.998 21:17:32 -- common/autotest_common.sh@10 -- # set +x
00:03:06.998 21:17:32 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:08.374 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:08.374 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:08.374 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:08.374 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:08.374 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:08.374 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:08.374 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:08.374 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:08.374 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:08.374 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:08.374 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:08.374 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:08.374 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:08.374 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:08.374 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:08.374 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:09.308 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:09.308 21:17:34 -- spdk/autotest.sh@140 -- # timing_exit afterboot
00:03:09.308 21:17:34 -- common/autotest_common.sh@716 -- # xtrace_disable
00:03:09.308 21:17:34 -- common/autotest_common.sh@10 -- # set +x
00:03:09.308 21:17:34 -- spdk/autotest.sh@144 -- # opal_revert_cleanup
00:03:09.308 21:17:34 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs
00:03:09.567 21:17:34 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54
00:03:09.567 21:17:34 -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:09.567 21:17:34 -- common/autotest_common.sh@1563 -- # local bdfs
00:03:09.567 21:17:34 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs
00:03:09.567 21:17:34 -- common/autotest_common.sh@1499 -- # bdfs=()
00:03:09.567 21:17:34 -- common/autotest_common.sh@1499 -- # local bdfs
00:03:09.567 21:17:34 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:09.567 21:17:34 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:09.567 21:17:34 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr'
00:03:09.567 21:17:35 -- common/autotest_common.sh@1501 -- # (( 1 == 0 ))
00:03:09.567 21:17:35 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0
00:03:09.567 21:17:35 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs)
00:03:09.567 21:17:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device
00:03:09.567 21:17:35 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:03:09.567 21:17:35 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:09.567 21:17:35 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:03:09.567 21:17:35 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:88:00.0
00:03:09.567 21:17:35 -- common/autotest_common.sh@1578 -- # [[ -z 0000:88:00.0 ]]
00:03:09.567 21:17:35 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=2478058
00:03:09.567 21:17:35 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:03:09.567 21:17:35 -- common/autotest_common.sh@1584 -- # waitforlisten 2478058
00:03:09.567 21:17:35 -- common/autotest_common.sh@817 -- # '[' -z 2478058 ']'
00:03:09.567 21:17:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:09.567 21:17:35 -- common/autotest_common.sh@822 -- # local max_retries=100
00:03:09.567 21:17:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:09.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:09.567 21:17:35 -- common/autotest_common.sh@826 -- # xtrace_disable
00:03:09.567 21:17:35 -- common/autotest_common.sh@10 -- # set +x
00:03:09.567 [2024-04-24 21:17:35.103089] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
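waitforlisten above simply polls until the freshly started target answers on its UNIX-domain RPC socket. A rough hand-rolled equivalent, as a sketch (paths and retry budget mirror the defaults shown in the log; rpc_get_methods is used only as a cheap liveness probe):

# Hedged sketch of the start-then-wait pattern logged above.
./build/bin/spdk_tgt &                       # launch the target in the background
pid=$!
for _ in $(seq 1 100); do                    # max_retries=100, as in the log
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        echo "spdk_tgt ($pid) is listening"; break
    fi
    sleep 0.1                                # socket not up yet; retry shortly
done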
00:03:09.567 [2024-04-24 21:17:35.103185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478058 ]
00:03:09.567 EAL: No free 2048 kB hugepages reported on node 1
00:03:09.567 [2024-04-24 21:17:35.165190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:09.826 [2024-04-24 21:17:35.279622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:03:10.392 21:17:36 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:03:10.392 21:17:36 -- common/autotest_common.sh@850 -- # return 0
00:03:10.392 21:17:36 -- common/autotest_common.sh@1586 -- # bdf_id=0
00:03:10.392 21:17:36 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}"
00:03:10.392 21:17:36 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:03:13.696 nvme0n1
00:03:13.696 21:17:39 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:13.696 [2024-04-24 21:17:39.325826] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:03:13.696 [2024-04-24 21:17:39.325874] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:03:13.696 request:
00:03:13.696 {
00:03:13.696 "nvme_ctrlr_name": "nvme0",
00:03:13.696 "password": "test",
00:03:13.696 "method": "bdev_nvme_opal_revert",
00:03:13.696 "req_id": 1
00:03:13.696 }
00:03:13.696 Got JSON-RPC error response
00:03:13.696 response:
00:03:13.696 {
00:03:13.696 "code": -32603,
00:03:13.696 "message": "Internal error"
00:03:13.696 }
00:03:13.696 21:17:39 -- common/autotest_common.sh@1590 -- # true
00:03:13.696 21:17:39 -- common/autotest_common.sh@1591 -- # (( ++bdf_id ))
00:03:13.696 21:17:39 -- common/autotest_common.sh@1594 -- # killprocess 2478058
00:03:13.696 21:17:39 -- common/autotest_common.sh@936 -- # '[' -z 2478058 ']'
00:03:13.696 21:17:39 -- common/autotest_common.sh@940 -- # kill -0 2478058
00:03:13.696 21:17:39 -- common/autotest_common.sh@941 -- # uname
00:03:13.696 21:17:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:03:13.696 21:17:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2478058
00:03:13.696 21:17:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:03:13.696 21:17:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:03:13.696 21:17:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2478058'
00:03:13.696 killing process with pid 2478058
00:03:13.696 21:17:39 -- common/autotest_common.sh@955 -- # kill 2478058
00:03:13.696 21:17:39 -- common/autotest_common.sh@960 -- # wait 2478058
00:03:15.593 21:17:41 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:03:15.593 21:17:41 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:03:15.593 21:17:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:03:15.593 21:17:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:03:15.593 21:17:41 -- spdk/autotest.sh@162 -- # timing_enter lib
00:03:15.593 21:17:41 -- common/autotest_common.sh@710 -- # xtrace_disable
00:03:15.593 21:17:41 -- common/autotest_common.sh@10 -- # set +x
00:03:15.593 21:17:41 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:15.593 21:17:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:15.593 21:17:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:15.593 21:17:41 -- common/autotest_common.sh@10 -- # set +x
00:03:15.851 ************************************
00:03:15.851 START TEST env
00:03:15.851 ************************************
00:03:15.851 21:17:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:15.851 * Looking for test storage...
00:03:15.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:03:15.851 21:17:41 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:15.851 21:17:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:15.851 21:17:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:15.851 21:17:41 -- common/autotest_common.sh@10 -- # set +x
00:03:15.851 ************************************
00:03:15.851 START TEST env_memory
00:03:15.851 ************************************
00:03:15.851 21:17:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:15.851
00:03:15.851
00:03:15.851 CUnit - A unit testing framework for C - Version 2.1-3
00:03:15.851 http://cunit.sourceforge.net/
00:03:15.851
00:03:15.851
00:03:15.851 Suite: memory
00:03:15.851 Test: alloc and free memory map ...[2024-04-24 21:17:41.498397] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:15.851 passed
00:03:15.851 Test: mem map translation ...[2024-04-24 21:17:41.520219] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:15.851 [2024-04-24 21:17:41.520241] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:15.851 [2024-04-24 21:17:41.520305] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:15.851 [2024-04-24 21:17:41.520317] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:16.110 passed
00:03:16.110 Test: mem map registration ...[2024-04-24 21:17:41.565823] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:03:16.110 [2024-04-24 21:17:41.565850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:03:16.110 passed
00:03:16.110 Test: mem map adjacent registrations ...passed
00:03:16.110
00:03:16.110 Run Summary: Type Total Ran Passed Failed Inactive
00:03:16.110 suites 1 1 n/a 0 0
00:03:16.110 tests 4 4 4 0 0
00:03:16.110 asserts 152 152 152 0 n/a
00:03:16.110
00:03:16.110 Elapsed time = 0.151 seconds
00:03:16.110
00:03:16.110 real 0m0.158s
00:03:16.110 user 0m0.150s
00:03:16.110 sys 0m0.007s
00:03:16.110 21:17:41 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:16.110 21:17:41 -- common/autotest_common.sh@10 -- # set +x
00:03:16.110 ************************************
00:03:16.110 END TEST env_memory
00:03:16.110 ************************************
00:03:16.110 21:17:41 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:16.110 21:17:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:16.110 21:17:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:16.110 21:17:41 -- common/autotest_common.sh@10 -- # set +x
00:03:16.110 ************************************
00:03:16.110 START TEST env_vtophys
00:03:16.110 ************************************
00:03:16.110 21:17:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:16.110 EAL: lib.eal log level changed from notice to debug
00:03:16.110 EAL: Detected lcore 0 as core 0 on socket 0
00:03:16.110 EAL: Detected lcore 1 as core 1 on socket 0
00:03:16.110 EAL: Detected lcore 2 as core 2 on socket 0
00:03:16.110 EAL: Detected lcore 3 as core 3 on socket 0
00:03:16.110 EAL: Detected lcore 4 as core 4 on socket 0
00:03:16.110 EAL: Detected lcore 5 as core 5 on socket 0
00:03:16.110 EAL: Detected lcore 6 as core 8 on socket 0
00:03:16.110 EAL: Detected lcore 7 as core 9 on socket 0
00:03:16.110 EAL: Detected lcore 8 as core 10 on socket 0
00:03:16.110 EAL: Detected lcore 9 as core 11 on socket 0
00:03:16.110 EAL: Detected lcore 10 as core 12 on socket 0
00:03:16.110 EAL: Detected lcore 11 as core 13 on socket 0
00:03:16.110 EAL: Detected lcore 12 as core 0 on socket 1
00:03:16.110 EAL: Detected lcore 13 as core 1 on socket 1
00:03:16.110 EAL: Detected lcore 14 as core 2 on socket 1
00:03:16.110 EAL: Detected lcore 15 as core 3 on socket 1
00:03:16.110 EAL: Detected lcore 16 as core 4 on socket 1
00:03:16.110 EAL: Detected lcore 17 as core 5 on socket 1
00:03:16.110 EAL: Detected lcore 18 as core 8 on socket 1
00:03:16.110 EAL: Detected lcore 19 as core 9 on socket 1
00:03:16.110 EAL: Detected lcore 20 as core 10 on socket 1
00:03:16.110 EAL: Detected lcore 21 as core 11 on socket 1
00:03:16.110 EAL: Detected lcore 22 as core 12 on socket 1
00:03:16.110 EAL: Detected lcore 23 as core 13 on socket 1
00:03:16.110 EAL: Detected lcore 24 as core 0 on socket 0
00:03:16.110 EAL: Detected lcore 25 as core 1 on socket 0
00:03:16.110 EAL: Detected lcore 26 as core 2 on socket 0
00:03:16.110 EAL: Detected lcore 27 as core 3 on socket 0
00:03:16.110 EAL: Detected lcore 28 as core 4 on socket 0
00:03:16.110 EAL: Detected lcore 29 as core 5 on socket 0
00:03:16.110 EAL: Detected lcore 30 as core 8 on socket 0
00:03:16.110 EAL: Detected lcore 31 as core 9 on socket 0
00:03:16.110 EAL: Detected lcore 32 as core 10 on socket 0
00:03:16.110 EAL: Detected lcore 33 as core 11 on socket 0
00:03:16.110 EAL: Detected lcore 34 as core 12 on socket 0
00:03:16.110 EAL: Detected lcore 35 as core 13 on socket 0
00:03:16.110 EAL: Detected lcore 36 as core 0 on socket 1
00:03:16.110 EAL: Detected lcore 37 as core 1 on socket 1
00:03:16.110 EAL: Detected lcore 38 as core 2 on socket 1
00:03:16.110 EAL: Detected lcore 39 as core 3 on socket 1
00:03:16.110 EAL: Detected lcore 40 as core 4 on socket 1
00:03:16.110 EAL: Detected lcore 41 as core 5 on socket 1
00:03:16.110 EAL: Detected lcore 42 as core 8 on socket 1
00:03:16.110 EAL: Detected lcore 43 as core 9 on socket 1
00:03:16.110 EAL: Detected lcore 44 as core 10 on socket 1
00:03:16.110 EAL: Detected lcore 45 as core 11 on socket 1
00:03:16.110 EAL: Detected lcore 46 as core 12 on socket 1
00:03:16.110 EAL: Detected lcore 47 as core 13 on socket 1
00:03:16.110 EAL: Maximum logical cores by configuration: 128
00:03:16.110 EAL: Detected CPU lcores: 48
00:03:16.110 EAL: Detected NUMA nodes: 2
00:03:16.110 EAL: Checking presence of .so 'librte_eal.so.24.0'
00:03:16.110 EAL: Detected shared linkage of DPDK
00:03:16.110 EAL: No shared files mode enabled, IPC will be disabled
00:03:16.110 EAL: Bus pci wants IOVA as 'DC'
00:03:16.110 EAL: Buses did not request a specific IOVA mode.
00:03:16.110 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:16.110 EAL: Selected IOVA mode 'VA'
00:03:16.110 EAL: No free 2048 kB hugepages reported on node 1
00:03:16.110 EAL: Probing VFIO support...
00:03:16.110 EAL: IOMMU type 1 (Type 1) is supported
00:03:16.110 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:16.110 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:16.110 EAL: VFIO support initialized
00:03:16.110 EAL: Ask a virtual area of 0x2e000 bytes
00:03:16.110 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:16.110 EAL: Setting up physically contiguous memory...
00:03:16.110 EAL: Setting maximum number of open files to 524288
00:03:16.110 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:16.110 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:16.110 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:16.110 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.110 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:16.110 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:16.110 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.110 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:16.110 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:16.110 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.110 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:16.110 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:16.110 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.110 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:16.110 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:16.110 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.110 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:16.110 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:16.110 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.110 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:16.110 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:16.110 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.110 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:16.111 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:16.111 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.111 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:16.111 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:16.111 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:16.111 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.111 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:16.111 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:16.111 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.111 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:16.111 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
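The "IOMMU is available, selecting IOVA as VA mode" and "VFIO support initialized" lines above hinge on host prerequisites that can be checked directly from sysfs. A hedged sketch of those checks (read-only commands; paths are standard kernel interfaces, not SPDK-specific):

# Hedged sketch: verify the conditions EAL probed for above.
ls /sys/kernel/iommu_groups/ | head     # non-empty when the IOMMU is enabled
lsmod | grep -E '^vfio'                 # vfio / vfio_pci / vfio_iommu_type1 loaded?
cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null  # N on IOMMU hosts

With IOMMU groups present, VFIO can enforce DMA isolation and EAL can use virtual addresses as IOVAs; without them, EAL would have fallen back to physical-address mode.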
00:03:16.111 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.111 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:16.111 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:16.111 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.111 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:16.111 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:16.111 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.111 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:16.111 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:16.111 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.369 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:16.369 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:16.369 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.369 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:16.369 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:16.369 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.369 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:16.369 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:16.369 EAL: Hugepages will be freed exactly as allocated.
00:03:16.369 EAL: No shared files mode enabled, IPC is disabled
00:03:16.369 EAL: No shared files mode enabled, IPC is disabled
00:03:16.369 EAL: TSC frequency is ~2700000 KHz
00:03:16.369 EAL: Main lcore 0 is ready (tid=7f682ae1ca00;cpuset=[0])
00:03:16.369 EAL: Trying to obtain current memory policy.
00:03:16.369 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.369 EAL: Restoring previous memory policy: 0
00:03:16.369 EAL: request: mp_malloc_sync
00:03:16.369 EAL: No shared files mode enabled, IPC is disabled
00:03:16.369 EAL: Heap on socket 0 was expanded by 2MB
00:03:16.369 EAL: No shared files mode enabled, IPC is disabled
00:03:16.369 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:16.369 EAL: Mem event callback 'spdk:(nil)' registered
00:03:16.369
00:03:16.369
00:03:16.369 CUnit - A unit testing framework for C - Version 2.1-3
00:03:16.369 http://cunit.sourceforge.net/
00:03:16.369
00:03:16.369
00:03:16.369 Suite: components_suite
00:03:16.369 Test: vtophys_malloc_test ...passed
00:03:16.369 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:16.369 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.369 EAL: Restoring previous memory policy: 4
00:03:16.369 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.369 EAL: request: mp_malloc_sync
00:03:16.369 EAL: No shared files mode enabled, IPC is disabled
00:03:16.369 EAL: Heap on socket 0 was expanded by 4MB
00:03:16.369 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.369 EAL: request: mp_malloc_sync
00:03:16.370 EAL: No shared files mode enabled, IPC is disabled
00:03:16.370 EAL: Heap on socket 0 was shrunk by 4MB
00:03:16.370 EAL: Trying to obtain current memory policy.
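Each "Heap on socket 0 was expanded by NMB" below is the DPDK malloc heap claiming 2 MB hugepages on demand and notifying registered mem event callbacks (the 'spdk:(nil)' callback above). A hedged sketch for watching that happen from outside the process; these are standard procfs/sysfs counters, not part of the test itself:

# Hedged sketch: observe the hugepage pool drain/refill while the test runs.
grep -E 'HugePages_(Total|Free)' /proc/meminfo
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages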
00:03:16.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.370 EAL: Restoring previous memory policy: 4 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was expanded by 6MB 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was shrunk by 6MB 00:03:16.370 EAL: Trying to obtain current memory policy. 00:03:16.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.370 EAL: Restoring previous memory policy: 4 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was expanded by 10MB 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was shrunk by 10MB 00:03:16.370 EAL: Trying to obtain current memory policy. 00:03:16.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.370 EAL: Restoring previous memory policy: 4 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was expanded by 18MB 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was shrunk by 18MB 00:03:16.370 EAL: Trying to obtain current memory policy. 00:03:16.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.370 EAL: Restoring previous memory policy: 4 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was expanded by 34MB 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was shrunk by 34MB 00:03:16.370 EAL: Trying to obtain current memory policy. 00:03:16.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.370 EAL: Restoring previous memory policy: 4 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was expanded by 66MB 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was shrunk by 66MB 00:03:16.370 EAL: Trying to obtain current memory policy. 
00:03:16.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.370 EAL: Restoring previous memory policy: 4 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was expanded by 130MB 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was shrunk by 130MB 00:03:16.370 EAL: Trying to obtain current memory policy. 00:03:16.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.370 EAL: Restoring previous memory policy: 4 00:03:16.370 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.370 EAL: request: mp_malloc_sync 00:03:16.370 EAL: No shared files mode enabled, IPC is disabled 00:03:16.370 EAL: Heap on socket 0 was expanded by 258MB 00:03:16.629 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.629 EAL: request: mp_malloc_sync 00:03:16.629 EAL: No shared files mode enabled, IPC is disabled 00:03:16.629 EAL: Heap on socket 0 was shrunk by 258MB 00:03:16.629 EAL: Trying to obtain current memory policy. 00:03:16.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.887 EAL: Restoring previous memory policy: 4 00:03:16.887 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.887 EAL: request: mp_malloc_sync 00:03:16.887 EAL: No shared files mode enabled, IPC is disabled 00:03:16.887 EAL: Heap on socket 0 was expanded by 514MB 00:03:16.887 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.887 EAL: request: mp_malloc_sync 00:03:16.887 EAL: No shared files mode enabled, IPC is disabled 00:03:16.887 EAL: Heap on socket 0 was shrunk by 514MB 00:03:16.887 EAL: Trying to obtain current memory policy. 
00:03:16.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:17.146 EAL: Restoring previous memory policy: 4 00:03:17.146 EAL: Calling mem event callback 'spdk:(nil)' 00:03:17.146 EAL: request: mp_malloc_sync 00:03:17.146 EAL: No shared files mode enabled, IPC is disabled 00:03:17.146 EAL: Heap on socket 0 was expanded by 1026MB 00:03:17.404 EAL: Calling mem event callback 'spdk:(nil)' 00:03:17.663 EAL: request: mp_malloc_sync 00:03:17.663 EAL: No shared files mode enabled, IPC is disabled 00:03:17.663 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:17.663 passed 00:03:17.663 00:03:17.663 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.663 suites 1 1 n/a 0 0 00:03:17.663 tests 2 2 2 0 0 00:03:17.663 asserts 497 497 497 0 n/a 00:03:17.663 00:03:17.663 Elapsed time = 1.387 seconds 00:03:17.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:17.663 EAL: request: mp_malloc_sync 00:03:17.663 EAL: No shared files mode enabled, IPC is disabled 00:03:17.663 EAL: Heap on socket 0 was shrunk by 2MB 00:03:17.663 EAL: No shared files mode enabled, IPC is disabled 00:03:17.663 EAL: No shared files mode enabled, IPC is disabled 00:03:17.663 EAL: No shared files mode enabled, IPC is disabled 00:03:17.663 00:03:17.663 real 0m1.512s 00:03:17.663 user 0m0.865s 00:03:17.663 sys 0m0.607s 00:03:17.663 21:17:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:17.663 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:03:17.663 ************************************ 00:03:17.663 END TEST env_vtophys 00:03:17.663 ************************************ 00:03:17.663 21:17:43 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:17.663 21:17:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.663 21:17:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.663 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:03:17.922 ************************************ 00:03:17.922 START TEST env_pci 00:03:17.922 ************************************ 00:03:17.922 21:17:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:17.922 00:03:17.922 00:03:17.922 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.922 http://cunit.sourceforge.net/ 00:03:17.922 00:03:17.922 00:03:17.922 Suite: pci 00:03:17.922 Test: pci_hook ...[2024-04-24 21:17:43.380329] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2479103 has claimed it 00:03:17.922 EAL: Cannot find device (10000:00:01.0) 00:03:17.922 EAL: Failed to attach device on primary process 00:03:17.922 passed 00:03:17.922 00:03:17.922 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.922 suites 1 1 n/a 0 0 00:03:17.922 tests 1 1 1 0 0 00:03:17.922 asserts 25 25 25 0 n/a 00:03:17.922 00:03:17.922 Elapsed time = 0.022 seconds 00:03:17.922 00:03:17.922 real 0m0.035s 00:03:17.922 user 0m0.011s 00:03:17.922 sys 0m0.024s 00:03:17.922 21:17:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:17.922 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:03:17.922 ************************************ 00:03:17.922 END TEST env_pci 00:03:17.922 ************************************ 00:03:17.922 21:17:43 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:17.922 21:17:43 -- env/env.sh@15 -- # uname 00:03:17.922 21:17:43 -- env/env.sh@15 -- # '[' Linux = 
Linux ']' 00:03:17.922 21:17:43 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:17.922 21:17:43 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:17.922 21:17:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:17.922 21:17:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.922 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:03:17.922 ************************************ 00:03:17.922 START TEST env_dpdk_post_init 00:03:17.922 ************************************ 00:03:17.922 21:17:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:17.922 EAL: Detected CPU lcores: 48 00:03:17.922 EAL: Detected NUMA nodes: 2 00:03:17.922 EAL: Detected shared linkage of DPDK 00:03:17.922 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:17.922 EAL: Selected IOVA mode 'VA' 00:03:17.922 EAL: No free 2048 kB hugepages reported on node 1 00:03:17.922 EAL: VFIO support initialized 00:03:17.922 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:18.180 EAL: Using IOMMU type 1 (Type 1) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:18.180 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:19.117 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:22.401 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:22.401 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:22.401 Starting DPDK initialization... 00:03:22.401 Starting SPDK post initialization... 00:03:22.401 SPDK NVMe probe 00:03:22.401 Attaching to 0000:88:00.0 00:03:22.401 Attached to 0000:88:00.0 00:03:22.401 Cleaning up... 
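[editor's note] The post-init pass above exercises the embedding case: the application brings up DPDK itself, then hands the already-initialized environment to SPDK before probing devices, which is the "Starting DPDK initialization... / Starting SPDK post initialization... / SPDK NVMe probe" sequence. A C sketch of that flow, assuming SPDK's spdk_env_dpdk_post_init() and a local PCIe probe; the printed strings mirror the log, but the program itself is illustrative.

#include <stdbool.h>
#include <stdio.h>
#include <rte_eal.h>
#include "spdk/env_dpdk.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true; /* claim every controller the enumeration finds */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	g_ctrlr = ctrlr;
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)        /* DPDK initialization */
		return 1;
	if (spdk_env_dpdk_post_init(false) != 0) /* SPDK adopts the live EAL env */
		return 1;
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0)
		return 1;                        /* local PCIe NVMe probe */
	if (g_ctrlr != NULL)
		spdk_nvme_detach(g_ctrlr);       /* cleanup */
	return 0;
}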
00:03:22.401 00:03:22.401 real 0m4.431s 00:03:22.401 user 0m3.289s 00:03:22.401 sys 0m0.199s 00:03:22.401 21:17:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:22.401 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:03:22.401 ************************************ 00:03:22.401 END TEST env_dpdk_post_init 00:03:22.401 ************************************ 00:03:22.401 21:17:47 -- env/env.sh@26 -- # uname 00:03:22.401 21:17:47 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:22.401 21:17:47 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:22.401 21:17:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:22.401 21:17:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:22.401 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:03:22.660 ************************************ 00:03:22.660 START TEST env_mem_callbacks 00:03:22.660 ************************************ 00:03:22.660 21:17:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:22.660 EAL: Detected CPU lcores: 48 00:03:22.660 EAL: Detected NUMA nodes: 2 00:03:22.660 EAL: Detected shared linkage of DPDK 00:03:22.660 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:22.660 EAL: Selected IOVA mode 'VA' 00:03:22.660 EAL: No free 2048 kB hugepages reported on node 1 00:03:22.660 EAL: VFIO support initialized 00:03:22.660 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:22.660 00:03:22.660 00:03:22.660 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.660 http://cunit.sourceforge.net/ 00:03:22.660 00:03:22.660 00:03:22.660 Suite: memory 00:03:22.660 Test: test ... 
00:03:22.660 register 0x200000200000 2097152 00:03:22.660 malloc 3145728 00:03:22.660 register 0x200000400000 4194304 00:03:22.660 buf 0x200000500000 len 3145728 PASSED 00:03:22.660 malloc 64 00:03:22.660 buf 0x2000004fff40 len 64 PASSED 00:03:22.660 malloc 4194304 00:03:22.660 register 0x200000800000 6291456 00:03:22.660 buf 0x200000a00000 len 4194304 PASSED 00:03:22.660 free 0x200000500000 3145728 00:03:22.660 free 0x2000004fff40 64 00:03:22.660 unregister 0x200000400000 4194304 PASSED 00:03:22.660 free 0x200000a00000 4194304 00:03:22.660 unregister 0x200000800000 6291456 PASSED 00:03:22.660 malloc 8388608 00:03:22.660 register 0x200000400000 10485760 00:03:22.660 buf 0x200000600000 len 8388608 PASSED 00:03:22.660 free 0x200000600000 8388608 00:03:22.660 unregister 0x200000400000 10485760 PASSED 00:03:22.660 passed 00:03:22.660 00:03:22.660 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.660 suites 1 1 n/a 0 0 00:03:22.660 tests 1 1 1 0 0 00:03:22.660 asserts 15 15 15 0 n/a 00:03:22.660 00:03:22.660 Elapsed time = 0.005 seconds 00:03:22.660 00:03:22.660 real 0m0.047s 00:03:22.660 user 0m0.014s 00:03:22.660 sys 0m0.033s 00:03:22.660 21:17:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:22.660 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:03:22.660 ************************************ 00:03:22.660 END TEST env_mem_callbacks 00:03:22.660 ************************************ 00:03:22.660 00:03:22.660 real 0m6.832s 00:03:22.660 user 0m4.557s 00:03:22.660 sys 0m1.253s 00:03:22.660 21:17:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:22.660 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:03:22.660 ************************************ 00:03:22.660 END TEST env 00:03:22.660 ************************************ 00:03:22.660 21:17:48 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:22.660 21:17:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:22.660 21:17:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:22.660 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:03:22.660 ************************************ 00:03:22.660 START TEST rpc 00:03:22.660 ************************************ 00:03:22.660 21:17:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:22.660 * Looking for test storage... 00:03:22.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:22.660 21:17:48 -- rpc/rpc.sh@65 -- # spdk_pid=2479787 00:03:22.660 21:17:48 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:22.660 21:17:48 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:22.660 21:17:48 -- rpc/rpc.sh@67 -- # waitforlisten 2479787 00:03:22.660 21:17:48 -- common/autotest_common.sh@817 -- # '[' -z 2479787 ']' 00:03:22.660 21:17:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:22.660 21:17:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:22.660 21:17:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:22.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
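[editor's note] The register/unregister lines in the trace above come from a mem map whose notify callback echoes every registration event; pinned allocations and frees feed it. A C sketch of the same idea, assuming the mem-map API from spdk/env.h; the default translation of 0, the app name, and the 2MB hugepage-aligned buffer are illustrative.

#include <stdio.h>
#include <stdint.h>
#include "spdk/env.h"

static int
notify(void *ctx, struct spdk_mem_map *map,
       enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	/* Mirrors the "register <addr> <len>" / "unregister <addr> <len>"
	 * lines printed by the test above. */
	printf("%s %p %ju\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, (uintmax_t)size);
	return 0;
}

static const struct spdk_mem_map_ops g_ops = { .notify_cb = notify };

int
main(int argc, char **argv)
{
	struct spdk_env_opts opts;
	struct spdk_mem_map *map;
	void *buf;

	spdk_env_opts_init(&opts);
	opts.name = "mem_cb_demo";
	if (spdk_env_init(&opts) != 0)
		return 1;
	/* Existing and future registrations are replayed into this map. */
	map = spdk_mem_map_alloc(0, &g_ops, NULL);
	buf = spdk_dma_malloc(2 * 1024 * 1024, 0x200000, NULL); /* -> register event   */
	spdk_dma_free(buf);            /* -> unregister event once the region is released */
	spdk_mem_map_free(&map);
	return 0;
}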
00:03:22.660 21:17:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:22.660 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:03:22.919 [2024-04-24 21:17:48.372709] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:03:22.919 [2024-04-24 21:17:48.372796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479787 ] 00:03:22.919 EAL: No free 2048 kB hugepages reported on node 1 00:03:22.919 [2024-04-24 21:17:48.433668] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:22.919 [2024-04-24 21:17:48.538535] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:22.919 [2024-04-24 21:17:48.538590] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2479787' to capture a snapshot of events at runtime. 00:03:22.919 [2024-04-24 21:17:48.538617] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:22.919 [2024-04-24 21:17:48.538635] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:22.919 [2024-04-24 21:17:48.538646] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2479787 for offline analysis/debug. 00:03:22.919 [2024-04-24 21:17:48.538689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:23.178 21:17:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:23.178 21:17:48 -- common/autotest_common.sh@850 -- # return 0 00:03:23.178 21:17:48 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:23.178 21:17:48 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:23.178 21:17:48 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:23.178 21:17:48 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:23.178 21:17:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.178 21:17:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.178 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:03:23.437 ************************************ 00:03:23.437 START TEST rpc_integrity 00:03:23.437 ************************************ 00:03:23.437 21:17:48 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:23.437 21:17:48 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:23.437 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:23.437 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:03:23.437 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.437 21:17:48 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:23.437 21:17:48 -- rpc/rpc.sh@13 -- # jq length 00:03:23.437 21:17:48 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:23.437 21:17:48 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:23.437 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:03:23.437 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:03:23.437 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.437 21:17:48 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:23.437 21:17:48 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:23.437 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:23.437 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:03:23.437 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.437 21:17:48 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:23.437 { 00:03:23.437 "name": "Malloc0", 00:03:23.437 "aliases": [ 00:03:23.437 "92d4d88c-13c8-458a-acb2-cf923c5bd701" 00:03:23.437 ], 00:03:23.437 "product_name": "Malloc disk", 00:03:23.437 "block_size": 512, 00:03:23.437 "num_blocks": 16384, 00:03:23.437 "uuid": "92d4d88c-13c8-458a-acb2-cf923c5bd701", 00:03:23.437 "assigned_rate_limits": { 00:03:23.437 "rw_ios_per_sec": 0, 00:03:23.437 "rw_mbytes_per_sec": 0, 00:03:23.437 "r_mbytes_per_sec": 0, 00:03:23.437 "w_mbytes_per_sec": 0 00:03:23.437 }, 00:03:23.437 "claimed": false, 00:03:23.437 "zoned": false, 00:03:23.437 "supported_io_types": { 00:03:23.437 "read": true, 00:03:23.437 "write": true, 00:03:23.437 "unmap": true, 00:03:23.437 "write_zeroes": true, 00:03:23.437 "flush": true, 00:03:23.437 "reset": true, 00:03:23.437 "compare": false, 00:03:23.437 "compare_and_write": false, 00:03:23.437 "abort": true, 00:03:23.437 "nvme_admin": false, 00:03:23.437 "nvme_io": false 00:03:23.437 }, 00:03:23.437 "memory_domains": [ 00:03:23.437 { 00:03:23.437 "dma_device_id": "system", 00:03:23.437 "dma_device_type": 1 00:03:23.437 }, 00:03:23.437 { 00:03:23.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.437 "dma_device_type": 2 00:03:23.437 } 00:03:23.437 ], 00:03:23.437 "driver_specific": {} 00:03:23.437 } 00:03:23.437 ]' 00:03:23.437 21:17:48 -- rpc/rpc.sh@17 -- # jq length 00:03:23.437 21:17:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:23.437 21:17:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:23.437 21:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:23.437 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:03:23.437 [2024-04-24 21:17:49.002005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:23.437 [2024-04-24 21:17:49.002049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:23.437 [2024-04-24 21:17:49.002072] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1135d40 00:03:23.437 [2024-04-24 21:17:49.002087] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:23.437 [2024-04-24 21:17:49.003596] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:23.437 [2024-04-24 21:17:49.003623] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:23.437 Passthru0 00:03:23.437 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.437 21:17:49 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:23.437 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:23.437 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.437 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.437 21:17:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:23.437 { 00:03:23.437 "name": "Malloc0", 00:03:23.437 "aliases": [ 00:03:23.437 "92d4d88c-13c8-458a-acb2-cf923c5bd701" 00:03:23.437 ], 00:03:23.437 "product_name": "Malloc disk", 00:03:23.437 "block_size": 512, 
00:03:23.437 "num_blocks": 16384, 00:03:23.437 "uuid": "92d4d88c-13c8-458a-acb2-cf923c5bd701", 00:03:23.437 "assigned_rate_limits": { 00:03:23.437 "rw_ios_per_sec": 0, 00:03:23.437 "rw_mbytes_per_sec": 0, 00:03:23.437 "r_mbytes_per_sec": 0, 00:03:23.437 "w_mbytes_per_sec": 0 00:03:23.437 }, 00:03:23.437 "claimed": true, 00:03:23.437 "claim_type": "exclusive_write", 00:03:23.437 "zoned": false, 00:03:23.437 "supported_io_types": { 00:03:23.437 "read": true, 00:03:23.437 "write": true, 00:03:23.437 "unmap": true, 00:03:23.437 "write_zeroes": true, 00:03:23.437 "flush": true, 00:03:23.437 "reset": true, 00:03:23.437 "compare": false, 00:03:23.437 "compare_and_write": false, 00:03:23.437 "abort": true, 00:03:23.437 "nvme_admin": false, 00:03:23.437 "nvme_io": false 00:03:23.437 }, 00:03:23.437 "memory_domains": [ 00:03:23.437 { 00:03:23.437 "dma_device_id": "system", 00:03:23.437 "dma_device_type": 1 00:03:23.437 }, 00:03:23.437 { 00:03:23.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.437 "dma_device_type": 2 00:03:23.437 } 00:03:23.437 ], 00:03:23.437 "driver_specific": {} 00:03:23.437 }, 00:03:23.437 { 00:03:23.437 "name": "Passthru0", 00:03:23.437 "aliases": [ 00:03:23.437 "8ff49f7b-5ea5-546b-9d7b-0f5b9d06e1bd" 00:03:23.437 ], 00:03:23.437 "product_name": "passthru", 00:03:23.437 "block_size": 512, 00:03:23.437 "num_blocks": 16384, 00:03:23.437 "uuid": "8ff49f7b-5ea5-546b-9d7b-0f5b9d06e1bd", 00:03:23.437 "assigned_rate_limits": { 00:03:23.437 "rw_ios_per_sec": 0, 00:03:23.437 "rw_mbytes_per_sec": 0, 00:03:23.437 "r_mbytes_per_sec": 0, 00:03:23.437 "w_mbytes_per_sec": 0 00:03:23.437 }, 00:03:23.437 "claimed": false, 00:03:23.437 "zoned": false, 00:03:23.437 "supported_io_types": { 00:03:23.437 "read": true, 00:03:23.437 "write": true, 00:03:23.437 "unmap": true, 00:03:23.437 "write_zeroes": true, 00:03:23.437 "flush": true, 00:03:23.437 "reset": true, 00:03:23.437 "compare": false, 00:03:23.437 "compare_and_write": false, 00:03:23.437 "abort": true, 00:03:23.437 "nvme_admin": false, 00:03:23.437 "nvme_io": false 00:03:23.437 }, 00:03:23.437 "memory_domains": [ 00:03:23.437 { 00:03:23.437 "dma_device_id": "system", 00:03:23.437 "dma_device_type": 1 00:03:23.437 }, 00:03:23.437 { 00:03:23.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.437 "dma_device_type": 2 00:03:23.437 } 00:03:23.437 ], 00:03:23.437 "driver_specific": { 00:03:23.437 "passthru": { 00:03:23.437 "name": "Passthru0", 00:03:23.437 "base_bdev_name": "Malloc0" 00:03:23.437 } 00:03:23.437 } 00:03:23.437 } 00:03:23.437 ]' 00:03:23.437 21:17:49 -- rpc/rpc.sh@21 -- # jq length 00:03:23.437 21:17:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:23.437 21:17:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:23.437 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:23.437 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.437 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.437 21:17:49 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:23.437 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:23.437 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.437 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.437 21:17:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:23.437 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:23.437 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.437 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.437 21:17:49 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:03:23.437 21:17:49 -- rpc/rpc.sh@26 -- # jq length 00:03:23.696 21:17:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:23.696 00:03:23.696 real 0m0.222s 00:03:23.696 user 0m0.140s 00:03:23.696 sys 0m0.028s 00:03:23.696 21:17:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:23.696 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.696 ************************************ 00:03:23.696 END TEST rpc_integrity 00:03:23.696 ************************************ 00:03:23.696 21:17:49 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:23.696 21:17:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.696 21:17:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.696 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.696 ************************************ 00:03:23.696 START TEST rpc_plugins 00:03:23.696 ************************************ 00:03:23.696 21:17:49 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:03:23.696 21:17:49 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:23.696 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:23.696 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.696 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.696 21:17:49 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:23.696 21:17:49 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:23.696 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:23.696 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.696 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.696 21:17:49 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:23.696 { 00:03:23.696 "name": "Malloc1", 00:03:23.696 "aliases": [ 00:03:23.696 "0353460f-65f6-469e-974d-a75bb7a8df1a" 00:03:23.696 ], 00:03:23.696 "product_name": "Malloc disk", 00:03:23.696 "block_size": 4096, 00:03:23.696 "num_blocks": 256, 00:03:23.696 "uuid": "0353460f-65f6-469e-974d-a75bb7a8df1a", 00:03:23.696 "assigned_rate_limits": { 00:03:23.696 "rw_ios_per_sec": 0, 00:03:23.696 "rw_mbytes_per_sec": 0, 00:03:23.696 "r_mbytes_per_sec": 0, 00:03:23.696 "w_mbytes_per_sec": 0 00:03:23.696 }, 00:03:23.696 "claimed": false, 00:03:23.696 "zoned": false, 00:03:23.696 "supported_io_types": { 00:03:23.696 "read": true, 00:03:23.696 "write": true, 00:03:23.696 "unmap": true, 00:03:23.696 "write_zeroes": true, 00:03:23.696 "flush": true, 00:03:23.696 "reset": true, 00:03:23.696 "compare": false, 00:03:23.696 "compare_and_write": false, 00:03:23.696 "abort": true, 00:03:23.696 "nvme_admin": false, 00:03:23.696 "nvme_io": false 00:03:23.696 }, 00:03:23.696 "memory_domains": [ 00:03:23.696 { 00:03:23.696 "dma_device_id": "system", 00:03:23.696 "dma_device_type": 1 00:03:23.696 }, 00:03:23.696 { 00:03:23.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.696 "dma_device_type": 2 00:03:23.696 } 00:03:23.696 ], 00:03:23.696 "driver_specific": {} 00:03:23.696 } 00:03:23.696 ]' 00:03:23.696 21:17:49 -- rpc/rpc.sh@32 -- # jq length 00:03:23.696 21:17:49 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:23.696 21:17:49 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:23.696 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:23.696 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.696 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.696 21:17:49 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:23.696 21:17:49 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:03:23.696 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.696 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.696 21:17:49 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:23.696 21:17:49 -- rpc/rpc.sh@36 -- # jq length 00:03:23.696 21:17:49 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:23.696 00:03:23.696 real 0m0.113s 00:03:23.696 user 0m0.078s 00:03:23.696 sys 0m0.009s 00:03:23.696 21:17:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:23.696 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.696 ************************************ 00:03:23.696 END TEST rpc_plugins 00:03:23.696 ************************************ 00:03:23.955 21:17:49 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:23.955 21:17:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.955 21:17:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.955 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.955 ************************************ 00:03:23.955 START TEST rpc_trace_cmd_test 00:03:23.955 ************************************ 00:03:23.955 21:17:49 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:03:23.955 21:17:49 -- rpc/rpc.sh@40 -- # local info 00:03:23.955 21:17:49 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:23.955 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:23.955 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.955 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:23.955 21:17:49 -- rpc/rpc.sh@42 -- # info='{ 00:03:23.955 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2479787", 00:03:23.955 "tpoint_group_mask": "0x8", 00:03:23.955 "iscsi_conn": { 00:03:23.955 "mask": "0x2", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "scsi": { 00:03:23.955 "mask": "0x4", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "bdev": { 00:03:23.955 "mask": "0x8", 00:03:23.955 "tpoint_mask": "0xffffffffffffffff" 00:03:23.955 }, 00:03:23.955 "nvmf_rdma": { 00:03:23.955 "mask": "0x10", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "nvmf_tcp": { 00:03:23.955 "mask": "0x20", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "ftl": { 00:03:23.955 "mask": "0x40", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "blobfs": { 00:03:23.955 "mask": "0x80", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "dsa": { 00:03:23.955 "mask": "0x200", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "thread": { 00:03:23.955 "mask": "0x400", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "nvme_pcie": { 00:03:23.955 "mask": "0x800", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "iaa": { 00:03:23.955 "mask": "0x1000", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "nvme_tcp": { 00:03:23.955 "mask": "0x2000", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "bdev_nvme": { 00:03:23.955 "mask": "0x4000", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 }, 00:03:23.955 "sock": { 00:03:23.955 "mask": "0x8000", 00:03:23.955 "tpoint_mask": "0x0" 00:03:23.955 } 00:03:23.955 }' 00:03:23.955 21:17:49 -- rpc/rpc.sh@43 -- # jq length 00:03:23.955 21:17:49 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:23.955 21:17:49 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:23.955 21:17:49 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:23.955 21:17:49 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
00:03:23.955 21:17:49 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:23.955 21:17:49 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:24.212 21:17:49 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:24.212 21:17:49 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:24.212 21:17:49 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:24.212 00:03:24.212 real 0m0.199s 00:03:24.212 user 0m0.174s 00:03:24.212 sys 0m0.017s 00:03:24.212 21:17:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:24.212 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.212 ************************************ 00:03:24.212 END TEST rpc_trace_cmd_test 00:03:24.212 ************************************ 00:03:24.212 21:17:49 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:24.212 21:17:49 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:24.212 21:17:49 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:24.212 21:17:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.212 21:17:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.212 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.212 ************************************ 00:03:24.212 START TEST rpc_daemon_integrity 00:03:24.212 ************************************ 00:03:24.212 21:17:49 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:24.212 21:17:49 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:24.212 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:24.212 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.212 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:24.212 21:17:49 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:24.212 21:17:49 -- rpc/rpc.sh@13 -- # jq length 00:03:24.212 21:17:49 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:24.212 21:17:49 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:24.212 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:24.212 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.212 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:24.212 21:17:49 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:24.212 21:17:49 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:24.212 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:24.212 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.213 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:24.213 21:17:49 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:24.213 { 00:03:24.213 "name": "Malloc2", 00:03:24.213 "aliases": [ 00:03:24.213 "c473b4fd-38a3-4f7b-8fde-f93cbd1187e6" 00:03:24.213 ], 00:03:24.213 "product_name": "Malloc disk", 00:03:24.213 "block_size": 512, 00:03:24.213 "num_blocks": 16384, 00:03:24.213 "uuid": "c473b4fd-38a3-4f7b-8fde-f93cbd1187e6", 00:03:24.213 "assigned_rate_limits": { 00:03:24.213 "rw_ios_per_sec": 0, 00:03:24.213 "rw_mbytes_per_sec": 0, 00:03:24.213 "r_mbytes_per_sec": 0, 00:03:24.213 "w_mbytes_per_sec": 0 00:03:24.213 }, 00:03:24.213 "claimed": false, 00:03:24.213 "zoned": false, 00:03:24.213 "supported_io_types": { 00:03:24.213 "read": true, 00:03:24.213 "write": true, 00:03:24.213 "unmap": true, 00:03:24.213 "write_zeroes": true, 00:03:24.213 "flush": true, 00:03:24.213 "reset": true, 00:03:24.213 "compare": false, 00:03:24.213 "compare_and_write": false, 00:03:24.213 "abort": true, 00:03:24.213 "nvme_admin": false, 00:03:24.213 "nvme_io": false 00:03:24.213 }, 00:03:24.213 "memory_domains": [ 00:03:24.213 { 00:03:24.213 "dma_device_id": "system", 00:03:24.213 
"dma_device_type": 1 00:03:24.213 }, 00:03:24.213 { 00:03:24.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.213 "dma_device_type": 2 00:03:24.213 } 00:03:24.213 ], 00:03:24.213 "driver_specific": {} 00:03:24.213 } 00:03:24.213 ]' 00:03:24.213 21:17:49 -- rpc/rpc.sh@17 -- # jq length 00:03:24.471 21:17:49 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:24.471 21:17:49 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:24.471 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:24.471 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.471 [2024-04-24 21:17:49.913002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:24.471 [2024-04-24 21:17:49.913046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:24.471 [2024-04-24 21:17:49.913075] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12cd2f0 00:03:24.471 [2024-04-24 21:17:49.913092] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:24.471 [2024-04-24 21:17:49.914449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:24.471 [2024-04-24 21:17:49.914477] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:24.471 Passthru0 00:03:24.471 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:24.471 21:17:49 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:24.471 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:24.471 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.471 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:24.471 21:17:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:24.471 { 00:03:24.471 "name": "Malloc2", 00:03:24.471 "aliases": [ 00:03:24.471 "c473b4fd-38a3-4f7b-8fde-f93cbd1187e6" 00:03:24.471 ], 00:03:24.471 "product_name": "Malloc disk", 00:03:24.471 "block_size": 512, 00:03:24.471 "num_blocks": 16384, 00:03:24.471 "uuid": "c473b4fd-38a3-4f7b-8fde-f93cbd1187e6", 00:03:24.471 "assigned_rate_limits": { 00:03:24.471 "rw_ios_per_sec": 0, 00:03:24.471 "rw_mbytes_per_sec": 0, 00:03:24.471 "r_mbytes_per_sec": 0, 00:03:24.471 "w_mbytes_per_sec": 0 00:03:24.471 }, 00:03:24.471 "claimed": true, 00:03:24.471 "claim_type": "exclusive_write", 00:03:24.471 "zoned": false, 00:03:24.471 "supported_io_types": { 00:03:24.471 "read": true, 00:03:24.471 "write": true, 00:03:24.471 "unmap": true, 00:03:24.471 "write_zeroes": true, 00:03:24.471 "flush": true, 00:03:24.471 "reset": true, 00:03:24.471 "compare": false, 00:03:24.471 "compare_and_write": false, 00:03:24.471 "abort": true, 00:03:24.471 "nvme_admin": false, 00:03:24.471 "nvme_io": false 00:03:24.471 }, 00:03:24.471 "memory_domains": [ 00:03:24.471 { 00:03:24.471 "dma_device_id": "system", 00:03:24.471 "dma_device_type": 1 00:03:24.471 }, 00:03:24.471 { 00:03:24.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.471 "dma_device_type": 2 00:03:24.471 } 00:03:24.471 ], 00:03:24.471 "driver_specific": {} 00:03:24.471 }, 00:03:24.471 { 00:03:24.471 "name": "Passthru0", 00:03:24.471 "aliases": [ 00:03:24.471 "c384ba59-aa3f-5216-8625-3a638a247b79" 00:03:24.471 ], 00:03:24.471 "product_name": "passthru", 00:03:24.471 "block_size": 512, 00:03:24.471 "num_blocks": 16384, 00:03:24.471 "uuid": "c384ba59-aa3f-5216-8625-3a638a247b79", 00:03:24.471 "assigned_rate_limits": { 00:03:24.471 "rw_ios_per_sec": 0, 00:03:24.471 "rw_mbytes_per_sec": 0, 00:03:24.471 "r_mbytes_per_sec": 0, 00:03:24.471 
"w_mbytes_per_sec": 0 00:03:24.471 }, 00:03:24.471 "claimed": false, 00:03:24.471 "zoned": false, 00:03:24.471 "supported_io_types": { 00:03:24.471 "read": true, 00:03:24.471 "write": true, 00:03:24.471 "unmap": true, 00:03:24.471 "write_zeroes": true, 00:03:24.471 "flush": true, 00:03:24.471 "reset": true, 00:03:24.471 "compare": false, 00:03:24.471 "compare_and_write": false, 00:03:24.471 "abort": true, 00:03:24.471 "nvme_admin": false, 00:03:24.471 "nvme_io": false 00:03:24.471 }, 00:03:24.471 "memory_domains": [ 00:03:24.471 { 00:03:24.471 "dma_device_id": "system", 00:03:24.471 "dma_device_type": 1 00:03:24.471 }, 00:03:24.471 { 00:03:24.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.471 "dma_device_type": 2 00:03:24.471 } 00:03:24.471 ], 00:03:24.471 "driver_specific": { 00:03:24.471 "passthru": { 00:03:24.471 "name": "Passthru0", 00:03:24.471 "base_bdev_name": "Malloc2" 00:03:24.471 } 00:03:24.471 } 00:03:24.471 } 00:03:24.471 ]' 00:03:24.471 21:17:49 -- rpc/rpc.sh@21 -- # jq length 00:03:24.471 21:17:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:24.471 21:17:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:24.471 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:24.471 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.471 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:24.471 21:17:49 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:24.471 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:24.471 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.471 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:24.471 21:17:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:24.471 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:24.471 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:24.471 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:24.471 21:17:49 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:24.471 21:17:49 -- rpc/rpc.sh@26 -- # jq length 00:03:24.471 21:17:50 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:24.471 00:03:24.471 real 0m0.239s 00:03:24.471 user 0m0.157s 00:03:24.471 sys 0m0.026s 00:03:24.471 21:17:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:24.471 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:03:24.471 ************************************ 00:03:24.471 END TEST rpc_daemon_integrity 00:03:24.471 ************************************ 00:03:24.471 21:17:50 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:24.471 21:17:50 -- rpc/rpc.sh@84 -- # killprocess 2479787 00:03:24.471 21:17:50 -- common/autotest_common.sh@936 -- # '[' -z 2479787 ']' 00:03:24.471 21:17:50 -- common/autotest_common.sh@940 -- # kill -0 2479787 00:03:24.471 21:17:50 -- common/autotest_common.sh@941 -- # uname 00:03:24.471 21:17:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:24.471 21:17:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2479787 00:03:24.471 21:17:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:24.471 21:17:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:24.471 21:17:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2479787' 00:03:24.471 killing process with pid 2479787 00:03:24.471 21:17:50 -- common/autotest_common.sh@955 -- # kill 2479787 00:03:24.471 21:17:50 -- common/autotest_common.sh@960 -- # wait 2479787 00:03:25.038 00:03:25.038 real 0m2.284s 00:03:25.038 user 0m2.847s 
00:03:25.038 sys 0m0.770s 00:03:25.038 21:17:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:25.038 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:03:25.038 ************************************ 00:03:25.038 END TEST rpc 00:03:25.038 ************************************ 00:03:25.038 21:17:50 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:25.038 21:17:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:25.038 21:17:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:25.038 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:03:25.038 ************************************ 00:03:25.038 START TEST skip_rpc 00:03:25.038 ************************************ 00:03:25.038 21:17:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:25.296 * Looking for test storage... 00:03:25.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:25.296 21:17:50 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:25.296 21:17:50 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:25.296 21:17:50 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:25.296 21:17:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:25.296 21:17:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:25.296 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:03:25.296 ************************************ 00:03:25.296 START TEST skip_rpc 00:03:25.296 ************************************ 00:03:25.296 21:17:50 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:03:25.296 21:17:50 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2480391 00:03:25.296 21:17:50 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:25.296 21:17:50 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:25.296 21:17:50 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:25.296 [2024-04-24 21:17:50.869233] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:03:25.296 [2024-04-24 21:17:50.869309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480391 ] 00:03:25.296 EAL: No free 2048 kB hugepages reported on node 1 00:03:25.296 [2024-04-24 21:17:50.928637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:25.555 [2024-04-24 21:17:51.045150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.830 21:17:55 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:30.830 21:17:55 -- common/autotest_common.sh@638 -- # local es=0 00:03:30.830 21:17:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:30.830 21:17:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:03:30.830 21:17:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:30.830 21:17:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:03:30.830 21:17:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:30.830 21:17:55 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:03:30.830 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:30.830 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:03:30.830 21:17:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:30.830 21:17:55 -- common/autotest_common.sh@641 -- # es=1 00:03:30.830 21:17:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:30.830 21:17:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:30.830 21:17:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:30.830 21:17:55 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:30.830 21:17:55 -- rpc/skip_rpc.sh@23 -- # killprocess 2480391 00:03:30.830 21:17:55 -- common/autotest_common.sh@936 -- # '[' -z 2480391 ']' 00:03:30.830 21:17:55 -- common/autotest_common.sh@940 -- # kill -0 2480391 00:03:30.830 21:17:55 -- common/autotest_common.sh@941 -- # uname 00:03:30.830 21:17:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:30.830 21:17:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2480391 00:03:30.830 21:17:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:30.830 21:17:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:30.830 21:17:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2480391' 00:03:30.830 killing process with pid 2480391 00:03:30.830 21:17:55 -- common/autotest_common.sh@955 -- # kill 2480391 00:03:30.830 21:17:55 -- common/autotest_common.sh@960 -- # wait 2480391 00:03:30.830 00:03:30.830 real 0m5.497s 00:03:30.830 user 0m5.183s 00:03:30.830 sys 0m0.318s 00:03:30.830 21:17:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:30.830 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:03:30.830 ************************************ 00:03:30.830 END TEST skip_rpc 00:03:30.830 ************************************ 00:03:30.830 21:17:56 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:30.830 21:17:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.830 21:17:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.830 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:03:30.830 ************************************ 00:03:30.830 START TEST skip_rpc_with_json 00:03:30.830 ************************************ 
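[editor's note] The skip_rpc pass above confirmed that rpc_cmd cannot reach a target started with --no-rpc-server. Throughout this log, rpc_cmd and waitforlisten are thin wrappers over JSON-RPC 2.0 on the Unix socket /var/tmp/spdk.sock. A raw POSIX-only client sketch, assuming a running target on the default socket path; against a --no-rpc-server target the connect() below fails, which is exactly what skip_rpc asserts.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main(void)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	char resp[4096];
	const char *req =
	    "{\"jsonrpc\":\"2.0\",\"method\":\"spdk_get_version\",\"id\":1}";
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
	ssize_t n;

	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		perror("connect"); /* no RPC server listening on the socket */
		return 1;
	}
	if (write(fd, req, strlen(req)) < 0)
		return 1;
	n = read(fd, resp, sizeof(resp) - 1);
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp); /* e.g. {"jsonrpc":"2.0","id":1,"result":{...}} */
	}
	close(fd);
	return 0;
}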
00:03:30.830 21:17:56 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:03:30.830 21:17:56 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:30.830 21:17:56 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2481084 00:03:30.830 21:17:56 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:30.830 21:17:56 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:30.830 21:17:56 -- rpc/skip_rpc.sh@31 -- # waitforlisten 2481084 00:03:30.830 21:17:56 -- common/autotest_common.sh@817 -- # '[' -z 2481084 ']' 00:03:30.830 21:17:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:30.830 21:17:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:30.830 21:17:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:30.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:30.830 21:17:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:30.830 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:03:30.830 [2024-04-24 21:17:56.484584] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:03:30.830 [2024-04-24 21:17:56.484716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2481084 ] 00:03:31.089 EAL: No free 2048 kB hugepages reported on node 1 00:03:31.089 [2024-04-24 21:17:56.544155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.089 [2024-04-24 21:17:56.650268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.348 21:17:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:31.348 21:17:56 -- common/autotest_common.sh@850 -- # return 0 00:03:31.348 21:17:56 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:31.348 21:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:31.348 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:03:31.348 [2024-04-24 21:17:56.916471] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:31.348 request: 00:03:31.348 { 00:03:31.348 "trtype": "tcp", 00:03:31.348 "method": "nvmf_get_transports", 00:03:31.348 "req_id": 1 00:03:31.348 } 00:03:31.348 Got JSON-RPC error response 00:03:31.348 response: 00:03:31.348 { 00:03:31.348 "code": -19, 00:03:31.348 "message": "No such device" 00:03:31.348 } 00:03:31.348 21:17:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:31.348 21:17:56 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:31.348 21:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:31.348 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:03:31.348 [2024-04-24 21:17:56.924589] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:31.348 21:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:31.348 21:17:56 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:31.348 21:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:31.348 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:03:31.607 21:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:31.607 21:17:57 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:31.607 { 
00:03:31.607 "subsystems": [ 00:03:31.607 { 00:03:31.607 "subsystem": "vfio_user_target", 00:03:31.607 "config": null 00:03:31.607 }, 00:03:31.607 { 00:03:31.607 "subsystem": "keyring", 00:03:31.607 "config": [] 00:03:31.607 }, 00:03:31.607 { 00:03:31.607 "subsystem": "iobuf", 00:03:31.607 "config": [ 00:03:31.607 { 00:03:31.607 "method": "iobuf_set_options", 00:03:31.607 "params": { 00:03:31.607 "small_pool_count": 8192, 00:03:31.607 "large_pool_count": 1024, 00:03:31.607 "small_bufsize": 8192, 00:03:31.607 "large_bufsize": 135168 00:03:31.607 } 00:03:31.607 } 00:03:31.607 ] 00:03:31.607 }, 00:03:31.607 { 00:03:31.607 "subsystem": "sock", 00:03:31.607 "config": [ 00:03:31.607 { 00:03:31.607 "method": "sock_impl_set_options", 00:03:31.607 "params": { 00:03:31.607 "impl_name": "posix", 00:03:31.607 "recv_buf_size": 2097152, 00:03:31.607 "send_buf_size": 2097152, 00:03:31.607 "enable_recv_pipe": true, 00:03:31.607 "enable_quickack": false, 00:03:31.607 "enable_placement_id": 0, 00:03:31.607 "enable_zerocopy_send_server": true, 00:03:31.607 "enable_zerocopy_send_client": false, 00:03:31.607 "zerocopy_threshold": 0, 00:03:31.607 "tls_version": 0, 00:03:31.607 "enable_ktls": false 00:03:31.607 } 00:03:31.607 }, 00:03:31.607 { 00:03:31.607 "method": "sock_impl_set_options", 00:03:31.607 "params": { 00:03:31.607 "impl_name": "ssl", 00:03:31.607 "recv_buf_size": 4096, 00:03:31.607 "send_buf_size": 4096, 00:03:31.607 "enable_recv_pipe": true, 00:03:31.607 "enable_quickack": false, 00:03:31.607 "enable_placement_id": 0, 00:03:31.607 "enable_zerocopy_send_server": true, 00:03:31.607 "enable_zerocopy_send_client": false, 00:03:31.607 "zerocopy_threshold": 0, 00:03:31.607 "tls_version": 0, 00:03:31.607 "enable_ktls": false 00:03:31.607 } 00:03:31.607 } 00:03:31.607 ] 00:03:31.607 }, 00:03:31.607 { 00:03:31.607 "subsystem": "vmd", 00:03:31.607 "config": [] 00:03:31.607 }, 00:03:31.607 { 00:03:31.607 "subsystem": "accel", 00:03:31.607 "config": [ 00:03:31.607 { 00:03:31.607 "method": "accel_set_options", 00:03:31.607 "params": { 00:03:31.607 "small_cache_size": 128, 00:03:31.607 "large_cache_size": 16, 00:03:31.607 "task_count": 2048, 00:03:31.607 "sequence_count": 2048, 00:03:31.607 "buf_count": 2048 00:03:31.607 } 00:03:31.607 } 00:03:31.607 ] 00:03:31.607 }, 00:03:31.607 { 00:03:31.607 "subsystem": "bdev", 00:03:31.607 "config": [ 00:03:31.607 { 00:03:31.607 "method": "bdev_set_options", 00:03:31.607 "params": { 00:03:31.607 "bdev_io_pool_size": 65535, 00:03:31.607 "bdev_io_cache_size": 256, 00:03:31.607 "bdev_auto_examine": true, 00:03:31.607 "iobuf_small_cache_size": 128, 00:03:31.607 "iobuf_large_cache_size": 16 00:03:31.607 } 00:03:31.607 }, 00:03:31.607 { 00:03:31.607 "method": "bdev_raid_set_options", 00:03:31.607 "params": { 00:03:31.607 "process_window_size_kb": 1024 00:03:31.607 } 00:03:31.607 }, 00:03:31.607 { 00:03:31.607 "method": "bdev_iscsi_set_options", 00:03:31.607 "params": { 00:03:31.607 "timeout_sec": 30 00:03:31.607 } 00:03:31.607 }, 00:03:31.607 { 00:03:31.607 "method": "bdev_nvme_set_options", 00:03:31.607 "params": { 00:03:31.607 "action_on_timeout": "none", 00:03:31.607 "timeout_us": 0, 00:03:31.607 "timeout_admin_us": 0, 00:03:31.607 "keep_alive_timeout_ms": 10000, 00:03:31.607 "arbitration_burst": 0, 00:03:31.607 "low_priority_weight": 0, 00:03:31.607 "medium_priority_weight": 0, 00:03:31.607 "high_priority_weight": 0, 00:03:31.607 "nvme_adminq_poll_period_us": 10000, 00:03:31.607 "nvme_ioq_poll_period_us": 0, 00:03:31.607 "io_queue_requests": 0, 00:03:31.607 
"delay_cmd_submit": true, 00:03:31.607 "transport_retry_count": 4, 00:03:31.607 "bdev_retry_count": 3, 00:03:31.607 "transport_ack_timeout": 0, 00:03:31.607 "ctrlr_loss_timeout_sec": 0, 00:03:31.607 "reconnect_delay_sec": 0, 00:03:31.607 "fast_io_fail_timeout_sec": 0, 00:03:31.607 "disable_auto_failback": false, 00:03:31.607 "generate_uuids": false, 00:03:31.607 "transport_tos": 0, 00:03:31.607 "nvme_error_stat": false, 00:03:31.607 "rdma_srq_size": 0, 00:03:31.607 "io_path_stat": false, 00:03:31.607 "allow_accel_sequence": false, 00:03:31.607 "rdma_max_cq_size": 0, 00:03:31.607 "rdma_cm_event_timeout_ms": 0, 00:03:31.607 "dhchap_digests": [ 00:03:31.607 "sha256", 00:03:31.607 "sha384", 00:03:31.607 "sha512" 00:03:31.607 ], 00:03:31.607 "dhchap_dhgroups": [ 00:03:31.607 "null", 00:03:31.607 "ffdhe2048", 00:03:31.607 "ffdhe3072", 00:03:31.607 "ffdhe4096", 00:03:31.607 "ffdhe6144", 00:03:31.607 "ffdhe8192" 00:03:31.607 ] 00:03:31.607 } 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "method": "bdev_nvme_set_hotplug", 00:03:31.608 "params": { 00:03:31.608 "period_us": 100000, 00:03:31.608 "enable": false 00:03:31.608 } 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "method": "bdev_wait_for_examine" 00:03:31.608 } 00:03:31.608 ] 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "subsystem": "scsi", 00:03:31.608 "config": null 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "subsystem": "scheduler", 00:03:31.608 "config": [ 00:03:31.608 { 00:03:31.608 "method": "framework_set_scheduler", 00:03:31.608 "params": { 00:03:31.608 "name": "static" 00:03:31.608 } 00:03:31.608 } 00:03:31.608 ] 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "subsystem": "vhost_scsi", 00:03:31.608 "config": [] 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "subsystem": "vhost_blk", 00:03:31.608 "config": [] 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "subsystem": "ublk", 00:03:31.608 "config": [] 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "subsystem": "nbd", 00:03:31.608 "config": [] 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "subsystem": "nvmf", 00:03:31.608 "config": [ 00:03:31.608 { 00:03:31.608 "method": "nvmf_set_config", 00:03:31.608 "params": { 00:03:31.608 "discovery_filter": "match_any", 00:03:31.608 "admin_cmd_passthru": { 00:03:31.608 "identify_ctrlr": false 00:03:31.608 } 00:03:31.608 } 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "method": "nvmf_set_max_subsystems", 00:03:31.608 "params": { 00:03:31.608 "max_subsystems": 1024 00:03:31.608 } 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "method": "nvmf_set_crdt", 00:03:31.608 "params": { 00:03:31.608 "crdt1": 0, 00:03:31.608 "crdt2": 0, 00:03:31.608 "crdt3": 0 00:03:31.608 } 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "method": "nvmf_create_transport", 00:03:31.608 "params": { 00:03:31.608 "trtype": "TCP", 00:03:31.608 "max_queue_depth": 128, 00:03:31.608 "max_io_qpairs_per_ctrlr": 127, 00:03:31.608 "in_capsule_data_size": 4096, 00:03:31.608 "max_io_size": 131072, 00:03:31.608 "io_unit_size": 131072, 00:03:31.608 "max_aq_depth": 128, 00:03:31.608 "num_shared_buffers": 511, 00:03:31.608 "buf_cache_size": 4294967295, 00:03:31.608 "dif_insert_or_strip": false, 00:03:31.608 "zcopy": false, 00:03:31.608 "c2h_success": true, 00:03:31.608 "sock_priority": 0, 00:03:31.608 "abort_timeout_sec": 1, 00:03:31.608 "ack_timeout": 0 00:03:31.608 } 00:03:31.608 } 00:03:31.608 ] 00:03:31.608 }, 00:03:31.608 { 00:03:31.608 "subsystem": "iscsi", 00:03:31.608 "config": [ 00:03:31.608 { 00:03:31.608 "method": "iscsi_set_options", 00:03:31.608 "params": { 00:03:31.608 "node_base": "iqn.2016-06.io.spdk", 
00:03:31.608 "max_sessions": 128, 00:03:31.608 "max_connections_per_session": 2, 00:03:31.608 "max_queue_depth": 64, 00:03:31.608 "default_time2wait": 2, 00:03:31.608 "default_time2retain": 20, 00:03:31.608 "first_burst_length": 8192, 00:03:31.608 "immediate_data": true, 00:03:31.608 "allow_duplicated_isid": false, 00:03:31.608 "error_recovery_level": 0, 00:03:31.608 "nop_timeout": 60, 00:03:31.608 "nop_in_interval": 30, 00:03:31.608 "disable_chap": false, 00:03:31.608 "require_chap": false, 00:03:31.608 "mutual_chap": false, 00:03:31.608 "chap_group": 0, 00:03:31.608 "max_large_datain_per_connection": 64, 00:03:31.608 "max_r2t_per_connection": 4, 00:03:31.608 "pdu_pool_size": 36864, 00:03:31.608 "immediate_data_pool_size": 16384, 00:03:31.608 "data_out_pool_size": 2048 00:03:31.608 } 00:03:31.608 } 00:03:31.608 ] 00:03:31.608 } 00:03:31.608 ] 00:03:31.608 } 00:03:31.608 21:17:57 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:31.608 21:17:57 -- rpc/skip_rpc.sh@40 -- # killprocess 2481084 00:03:31.608 21:17:57 -- common/autotest_common.sh@936 -- # '[' -z 2481084 ']' 00:03:31.608 21:17:57 -- common/autotest_common.sh@940 -- # kill -0 2481084 00:03:31.608 21:17:57 -- common/autotest_common.sh@941 -- # uname 00:03:31.608 21:17:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:31.608 21:17:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2481084 00:03:31.608 21:17:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:31.608 21:17:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:31.608 21:17:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2481084' 00:03:31.608 killing process with pid 2481084 00:03:31.608 21:17:57 -- common/autotest_common.sh@955 -- # kill 2481084 00:03:31.608 21:17:57 -- common/autotest_common.sh@960 -- # wait 2481084 00:03:32.174 21:17:57 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2481230 00:03:32.174 21:17:57 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:32.174 21:17:57 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:37.436 21:18:02 -- rpc/skip_rpc.sh@50 -- # killprocess 2481230 00:03:37.436 21:18:02 -- common/autotest_common.sh@936 -- # '[' -z 2481230 ']' 00:03:37.436 21:18:02 -- common/autotest_common.sh@940 -- # kill -0 2481230 00:03:37.436 21:18:02 -- common/autotest_common.sh@941 -- # uname 00:03:37.436 21:18:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:37.436 21:18:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2481230 00:03:37.436 21:18:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:37.436 21:18:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:37.436 21:18:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2481230' 00:03:37.436 killing process with pid 2481230 00:03:37.436 21:18:02 -- common/autotest_common.sh@955 -- # kill 2481230 00:03:37.436 21:18:02 -- common/autotest_common.sh@960 -- # wait 2481230 00:03:37.436 21:18:03 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:37.436 21:18:03 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:37.436 00:03:37.436 real 0m6.621s 00:03:37.436 user 0m6.213s 00:03:37.436 sys 0m0.680s 00:03:37.436 21:18:03 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:03:37.436 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:03:37.436 ************************************ 00:03:37.436 END TEST skip_rpc_with_json 00:03:37.436 ************************************ 00:03:37.436 21:18:03 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:37.436 21:18:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.436 21:18:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.436 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:03:37.694 ************************************ 00:03:37.694 START TEST skip_rpc_with_delay 00:03:37.694 ************************************ 00:03:37.694 21:18:03 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:03:37.694 21:18:03 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.694 21:18:03 -- common/autotest_common.sh@638 -- # local es=0 00:03:37.694 21:18:03 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.694 21:18:03 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.694 21:18:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:37.694 21:18:03 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.694 21:18:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:37.694 21:18:03 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.694 21:18:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:37.694 21:18:03 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.694 21:18:03 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:37.695 21:18:03 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.695 [2024-04-24 21:18:03.227027] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:37.695 [2024-04-24 21:18:03.227142] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:37.695 21:18:03 -- common/autotest_common.sh@641 -- # es=1 00:03:37.695 21:18:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:37.695 21:18:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:37.695 21:18:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:37.695 00:03:37.695 real 0m0.067s 00:03:37.695 user 0m0.045s 00:03:37.695 sys 0m0.022s 00:03:37.695 21:18:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:37.695 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:03:37.695 ************************************ 00:03:37.695 END TEST skip_rpc_with_delay 00:03:37.695 ************************************ 00:03:37.695 21:18:03 -- rpc/skip_rpc.sh@77 -- # uname 00:03:37.695 21:18:03 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:37.695 21:18:03 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:37.695 21:18:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.695 21:18:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.695 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:03:37.695 ************************************ 00:03:37.695 START TEST exit_on_failed_rpc_init 00:03:37.695 ************************************ 00:03:37.695 21:18:03 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:03:37.695 21:18:03 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2482072 00:03:37.695 21:18:03 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:37.695 21:18:03 -- rpc/skip_rpc.sh@63 -- # waitforlisten 2482072 00:03:37.695 21:18:03 -- common/autotest_common.sh@817 -- # '[' -z 2482072 ']' 00:03:37.695 21:18:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:37.695 21:18:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:37.695 21:18:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:37.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:37.695 21:18:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:37.695 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:03:37.953 [2024-04-24 21:18:03.414162] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:03:37.953 [2024-04-24 21:18:03.414250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482072 ] 00:03:37.953 EAL: No free 2048 kB hugepages reported on node 1 00:03:37.953 [2024-04-24 21:18:03.474775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.953 [2024-04-24 21:18:03.583153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.212 21:18:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:38.212 21:18:03 -- common/autotest_common.sh@850 -- # return 0 00:03:38.212 21:18:03 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:38.212 21:18:03 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.212 21:18:03 -- common/autotest_common.sh@638 -- # local es=0 00:03:38.212 21:18:03 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.212 21:18:03 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.212 21:18:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:38.212 21:18:03 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.212 21:18:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:38.212 21:18:03 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.212 21:18:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:38.212 21:18:03 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.212 21:18:03 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:38.212 21:18:03 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.212 [2024-04-24 21:18:03.884274] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:03:38.212 [2024-04-24 21:18:03.884363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482086 ] 00:03:38.471 EAL: No free 2048 kB hugepages reported on node 1 00:03:38.471 [2024-04-24 21:18:03.945775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.471 [2024-04-24 21:18:04.060468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:38.471 [2024-04-24 21:18:04.060590] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
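The rpc.c *ERROR* lines above are the expected outcome of exit_on_failed_rpc_init: the first target already owns /var/tmp/spdk.sock, so a second spdk_tgt started without a distinct RPC address must fail to listen, stop the app, and exit non-zero rather than hang. A hedged reproduction of the collision, with paths as in the log and the test's exit-status bookkeeping reduced to its essence:

    spdk_tgt -m 0x1 &                 # first instance binds /var/tmp/spdk.sock
    first_pid=$!
    # ... wait for the socket to answer RPCs before proceeding ...

    if spdk_tgt -m 0x2; then          # same default socket: must be refused
        echo 'second target unexpectedly initialized RPC' >&2
        exit 1
    fi

    kill -SIGINT "$first_pid" && wait "$first_pid"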
00:03:38.471 [2024-04-24 21:18:04.060613] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:38.471 [2024-04-24 21:18:04.060634] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:38.729 21:18:04 -- common/autotest_common.sh@641 -- # es=234 00:03:38.729 21:18:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:38.729 21:18:04 -- common/autotest_common.sh@650 -- # es=106 00:03:38.729 21:18:04 -- common/autotest_common.sh@651 -- # case "$es" in 00:03:38.729 21:18:04 -- common/autotest_common.sh@658 -- # es=1 00:03:38.729 21:18:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:38.729 21:18:04 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:38.729 21:18:04 -- rpc/skip_rpc.sh@70 -- # killprocess 2482072 00:03:38.729 21:18:04 -- common/autotest_common.sh@936 -- # '[' -z 2482072 ']' 00:03:38.729 21:18:04 -- common/autotest_common.sh@940 -- # kill -0 2482072 00:03:38.729 21:18:04 -- common/autotest_common.sh@941 -- # uname 00:03:38.729 21:18:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:38.729 21:18:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2482072 00:03:38.729 21:18:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:38.729 21:18:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:38.729 21:18:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2482072' 00:03:38.729 killing process with pid 2482072 00:03:38.729 21:18:04 -- common/autotest_common.sh@955 -- # kill 2482072 00:03:38.729 21:18:04 -- common/autotest_common.sh@960 -- # wait 2482072 00:03:38.989 00:03:38.989 real 0m1.297s 00:03:38.989 user 0m1.465s 00:03:38.989 sys 0m0.442s 00:03:38.989 21:18:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:38.989 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:03:38.989 ************************************ 00:03:38.989 END TEST exit_on_failed_rpc_init 00:03:38.989 ************************************ 00:03:39.248 21:18:04 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:39.248 00:03:39.248 real 0m14.016s 00:03:39.248 user 0m13.096s 00:03:39.248 sys 0m1.775s 00:03:39.248 21:18:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:39.248 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:03:39.248 ************************************ 00:03:39.248 END TEST skip_rpc 00:03:39.248 ************************************ 00:03:39.248 21:18:04 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:39.248 21:18:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:39.248 21:18:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:39.248 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:03:39.248 ************************************ 00:03:39.248 START TEST rpc_client 00:03:39.248 ************************************ 00:03:39.248 21:18:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:39.248 * Looking for test storage... 
00:03:39.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:39.248 21:18:04 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:39.248 OK 00:03:39.248 21:18:04 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:39.248 00:03:39.248 real 0m0.070s 00:03:39.248 user 0m0.032s 00:03:39.248 sys 0m0.043s 00:03:39.248 21:18:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:39.248 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:03:39.248 ************************************ 00:03:39.248 END TEST rpc_client 00:03:39.248 ************************************ 00:03:39.248 21:18:04 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:39.248 21:18:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:39.248 21:18:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:39.248 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:03:39.507 ************************************ 00:03:39.507 START TEST json_config 00:03:39.507 ************************************ 00:03:39.507 21:18:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:39.507 21:18:05 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:39.507 21:18:05 -- nvmf/common.sh@7 -- # uname -s 00:03:39.507 21:18:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:39.507 21:18:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:39.507 21:18:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:39.507 21:18:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:39.507 21:18:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:39.508 21:18:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:39.508 21:18:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:39.508 21:18:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:39.508 21:18:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:39.508 21:18:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:39.508 21:18:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:39.508 21:18:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:39.508 21:18:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:39.508 21:18:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:39.508 21:18:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:39.508 21:18:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:39.508 21:18:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:39.508 21:18:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:39.508 21:18:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:39.508 21:18:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:39.508 21:18:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.508 21:18:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.508 21:18:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.508 21:18:05 -- paths/export.sh@5 -- # export PATH 00:03:39.508 21:18:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.508 21:18:05 -- nvmf/common.sh@47 -- # : 0 00:03:39.508 21:18:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:39.508 21:18:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:39.508 21:18:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:39.508 21:18:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:39.508 21:18:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:39.508 21:18:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:39.508 21:18:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:39.508 21:18:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:39.508 21:18:05 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:39.508 21:18:05 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:39.508 21:18:05 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:39.508 21:18:05 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:39.508 21:18:05 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:39.508 21:18:05 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:39.508 21:18:05 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:39.508 21:18:05 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:39.508 21:18:05 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:39.508 21:18:05 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:39.508 21:18:05 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:03:39.508 21:18:05 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:39.508 21:18:05 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:39.508 21:18:05 -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:39.508 21:18:05 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:39.508 21:18:05 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:39.508 INFO: JSON configuration test init 00:03:39.508 21:18:05 -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:39.508 21:18:05 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:39.508 21:18:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:39.508 21:18:05 -- common/autotest_common.sh@10 -- # set +x 00:03:39.508 21:18:05 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:39.508 21:18:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:39.508 21:18:05 -- common/autotest_common.sh@10 -- # set +x 00:03:39.508 21:18:05 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:39.508 21:18:05 -- json_config/common.sh@9 -- # local app=target 00:03:39.508 21:18:05 -- json_config/common.sh@10 -- # shift 00:03:39.508 21:18:05 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:39.508 21:18:05 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:39.508 21:18:05 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:39.508 21:18:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:39.508 21:18:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:39.508 21:18:05 -- json_config/common.sh@22 -- # app_pid["$app"]=2482343 00:03:39.508 21:18:05 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:39.508 21:18:05 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:39.508 Waiting for target to run... 00:03:39.508 21:18:05 -- json_config/common.sh@25 -- # waitforlisten 2482343 /var/tmp/spdk_tgt.sock 00:03:39.508 21:18:05 -- common/autotest_common.sh@817 -- # '[' -z 2482343 ']' 00:03:39.508 21:18:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:39.508 21:18:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:39.508 21:18:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:39.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:39.508 21:18:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:39.508 21:18:05 -- common/autotest_common.sh@10 -- # set +x 00:03:39.508 [2024-04-24 21:18:05.119007] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
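waitforlisten 2482343 /var/tmp/spdk_tgt.sock in the trace blocks until the freshly forked target answers on its RPC socket, giving up after max_retries=100 polls. The internals below are only approximated; the probe RPC and sleep interval are assumptions, not the autotest_common.sh implementation, and rpc.py abbreviates the scripts/rpc.py path shown in the log:

    # Rough sketch of a waitforlisten-style poll loop.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1          # target died early
            rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1 && return 0
            sleep 0.1                                       # interval assumed
        done
        return 1
    }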
00:03:39.508 [2024-04-24 21:18:05.119109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482343 ] 00:03:39.508 EAL: No free 2048 kB hugepages reported on node 1 00:03:40.076 [2024-04-24 21:18:05.612080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.076 [2024-04-24 21:18:05.717449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.641 21:18:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:40.641 21:18:06 -- common/autotest_common.sh@850 -- # return 0 00:03:40.641 21:18:06 -- json_config/common.sh@26 -- # echo '' 00:03:40.641 00:03:40.641 21:18:06 -- json_config/json_config.sh@269 -- # create_accel_config 00:03:40.641 21:18:06 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:40.641 21:18:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:40.641 21:18:06 -- common/autotest_common.sh@10 -- # set +x 00:03:40.641 21:18:06 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:40.641 21:18:06 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:40.641 21:18:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:40.641 21:18:06 -- common/autotest_common.sh@10 -- # set +x 00:03:40.641 21:18:06 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:40.641 21:18:06 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:40.641 21:18:06 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:43.924 21:18:09 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:43.924 21:18:09 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:43.924 21:18:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:43.924 21:18:09 -- common/autotest_common.sh@10 -- # set +x 00:03:43.924 21:18:09 -- json_config/json_config.sh@45 -- # local ret=0 00:03:43.924 21:18:09 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:43.924 21:18:09 -- json_config/json_config.sh@46 -- # local enabled_types 00:03:43.924 21:18:09 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:43.924 21:18:09 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:43.924 21:18:09 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:43.924 21:18:09 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:43.924 21:18:09 -- json_config/json_config.sh@48 -- # local get_types 00:03:43.924 21:18:09 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:43.924 21:18:09 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:43.924 21:18:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:43.924 21:18:09 -- common/autotest_common.sh@10 -- # set +x 00:03:43.924 21:18:09 -- json_config/json_config.sh@55 -- # return 0 00:03:43.924 21:18:09 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:43.924 21:18:09 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:43.924 21:18:09 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:43.924 21:18:09 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:43.924 21:18:09 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:43.924 21:18:09 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:43.925 21:18:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:43.925 21:18:09 -- common/autotest_common.sh@10 -- # set +x 00:03:43.925 21:18:09 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:43.925 21:18:09 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:43.925 21:18:09 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:43.925 21:18:09 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:43.925 21:18:09 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:44.183 MallocForNvmf0 00:03:44.183 21:18:09 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:44.183 21:18:09 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:44.441 MallocForNvmf1 00:03:44.441 21:18:10 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:44.441 21:18:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:44.699 [2024-04-24 21:18:10.265373] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:44.699 21:18:10 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:44.699 21:18:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:44.957 21:18:10 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:44.957 21:18:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:45.215 21:18:10 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:45.215 21:18:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:45.473 21:18:11 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:45.473 21:18:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:45.731 [2024-04-24 21:18:11.240505] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:45.731 21:18:11 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:45.731 21:18:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:45.731 
21:18:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.731 21:18:11 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:45.731 21:18:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:45.731 21:18:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.731 21:18:11 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:45.731 21:18:11 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:45.731 21:18:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:45.990 MallocBdevForConfigChangeCheck 00:03:45.990 21:18:11 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:45.990 21:18:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:45.990 21:18:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.990 21:18:11 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:45.990 21:18:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:46.560 21:18:11 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:46.560 INFO: shutting down applications... 00:03:46.560 21:18:11 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:46.560 21:18:11 -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:46.560 21:18:11 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:46.560 21:18:11 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:47.932 Calling clear_iscsi_subsystem 00:03:47.932 Calling clear_nvmf_subsystem 00:03:47.932 Calling clear_nbd_subsystem 00:03:47.932 Calling clear_ublk_subsystem 00:03:47.932 Calling clear_vhost_blk_subsystem 00:03:47.932 Calling clear_vhost_scsi_subsystem 00:03:47.932 Calling clear_bdev_subsystem 00:03:47.932 21:18:13 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:47.932 21:18:13 -- json_config/json_config.sh@343 -- # count=100 00:03:47.932 21:18:13 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:47.932 21:18:13 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:47.932 21:18:13 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:47.932 21:18:13 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:48.498 21:18:13 -- json_config/json_config.sh@345 -- # break 00:03:48.498 21:18:13 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:48.498 21:18:13 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:48.498 21:18:13 -- json_config/common.sh@31 -- # local app=target 00:03:48.498 21:18:13 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:48.498 21:18:13 -- json_config/common.sh@35 -- # [[ -n 2482343 ]] 00:03:48.498 21:18:13 -- json_config/common.sh@38 -- # kill -SIGINT 2482343 00:03:48.498 21:18:13 -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:48.498 21:18:13 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:03:48.498 21:18:13 -- json_config/common.sh@41 -- # kill -0 2482343 00:03:48.498 21:18:13 -- json_config/common.sh@45 -- # sleep 0.5 00:03:49.065 21:18:14 -- json_config/common.sh@40 -- # (( i++ )) 00:03:49.065 21:18:14 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:49.065 21:18:14 -- json_config/common.sh@41 -- # kill -0 2482343 00:03:49.065 21:18:14 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:49.065 21:18:14 -- json_config/common.sh@43 -- # break 00:03:49.065 21:18:14 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:49.065 21:18:14 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:49.065 SPDK target shutdown done 00:03:49.065 21:18:14 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:03:49.065 INFO: relaunching applications... 00:03:49.065 21:18:14 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:49.065 21:18:14 -- json_config/common.sh@9 -- # local app=target 00:03:49.066 21:18:14 -- json_config/common.sh@10 -- # shift 00:03:49.066 21:18:14 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:49.066 21:18:14 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:49.066 21:18:14 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:49.066 21:18:14 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:49.066 21:18:14 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:49.066 21:18:14 -- json_config/common.sh@22 -- # app_pid["$app"]=2484159 00:03:49.066 21:18:14 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:49.066 21:18:14 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:49.066 Waiting for target to run... 00:03:49.066 21:18:14 -- json_config/common.sh@25 -- # waitforlisten 2484159 /var/tmp/spdk_tgt.sock 00:03:49.066 21:18:14 -- common/autotest_common.sh@817 -- # '[' -z 2484159 ']' 00:03:49.066 21:18:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:49.066 21:18:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:49.066 21:18:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:49.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:49.066 21:18:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:49.066 21:18:14 -- common/autotest_common.sh@10 -- # set +x 00:03:49.066 [2024-04-24 21:18:14.521116] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
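The relaunch above closes the configuration round-trip: the state captured earlier with save_config is handed back through --json, and the new target has to come up identical to the one just shut down. Stripped of the harness, the two steps are (socket and file names as in the log, rpc.py standing in for the full scripts/rpc.py path):

    # Snapshot the live configuration of the running target...
    rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json

    # ...then boot a fresh target directly from that snapshot.
    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json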
00:03:49.066 [2024-04-24 21:18:14.521226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484159 ] 00:03:49.066 EAL: No free 2048 kB hugepages reported on node 1 00:03:49.633 [2024-04-24 21:18:15.053517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.633 [2024-04-24 21:18:15.157637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.917 [2024-04-24 21:18:18.194479] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:52.917 [2024-04-24 21:18:18.226954] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:53.482 21:18:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:53.482 21:18:18 -- common/autotest_common.sh@850 -- # return 0 00:03:53.482 21:18:18 -- json_config/common.sh@26 -- # echo '' 00:03:53.482 00:03:53.482 21:18:18 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:03:53.482 21:18:18 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:53.482 INFO: Checking if target configuration is the same... 00:03:53.482 21:18:18 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.482 21:18:18 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:03:53.482 21:18:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.482 + '[' 2 -ne 2 ']' 00:03:53.482 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:53.482 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:53.482 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.482 +++ basename /dev/fd/62 00:03:53.482 ++ mktemp /tmp/62.XXX 00:03:53.482 + tmp_file_1=/tmp/62.U8b 00:03:53.482 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.482 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:53.482 + tmp_file_2=/tmp/spdk_tgt_config.json.7bz 00:03:53.482 + ret=0 00:03:53.482 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:53.740 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:53.740 + diff -u /tmp/62.U8b /tmp/spdk_tgt_config.json.7bz 00:03:53.740 + echo 'INFO: JSON config files are the same' 00:03:53.740 INFO: JSON config files are the same 00:03:53.740 + rm /tmp/62.U8b /tmp/spdk_tgt_config.json.7bz 00:03:53.740 + exit 0 00:03:53.740 21:18:19 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:03:53.740 21:18:19 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:53.740 INFO: changing configuration and checking if this can be detected... 
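The '+' lines above are json_diff.sh comparing the live configuration (save_config streamed through /dev/fd/62) against the saved spdk_tgt_config.json. Both sides pass through config_filter.py -method sort first, so key ordering cannot produce a false mismatch; only a real difference makes diff return non-zero. A condensed sketch of the harness (mktemp suffixes vary per run, and feeding config_filter.py via stdin is assumed from its use here):

    live=$(mktemp /tmp/62.XXX)
    saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)

    rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method sort > "$live"
    config_filter.py -method sort < spdk_tgt_config.json > "$saved"

    if diff -u "$live" "$saved"; then
        echo 'INFO: JSON config files are the same'
    fi
    rm "$live" "$saved"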
00:03:53.740 21:18:19 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:53.740 21:18:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:53.998 21:18:19 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.998 21:18:19 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:03:53.998 21:18:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.998 + '[' 2 -ne 2 ']' 00:03:53.998 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:53.998 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:53.998 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.998 +++ basename /dev/fd/62 00:03:53.998 ++ mktemp /tmp/62.XXX 00:03:53.998 + tmp_file_1=/tmp/62.GDu 00:03:53.998 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.998 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:53.998 + tmp_file_2=/tmp/spdk_tgt_config.json.lu9 00:03:53.998 + ret=0 00:03:53.998 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:54.564 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:54.564 + diff -u /tmp/62.GDu /tmp/spdk_tgt_config.json.lu9 00:03:54.564 + ret=1 00:03:54.564 + echo '=== Start of file: /tmp/62.GDu ===' 00:03:54.564 + cat /tmp/62.GDu 00:03:54.564 + echo '=== End of file: /tmp/62.GDu ===' 00:03:54.564 + echo '' 00:03:54.564 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lu9 ===' 00:03:54.564 + cat /tmp/spdk_tgt_config.json.lu9 00:03:54.564 + echo '=== End of file: /tmp/spdk_tgt_config.json.lu9 ===' 00:03:54.564 + echo '' 00:03:54.564 + rm /tmp/62.GDu /tmp/spdk_tgt_config.json.lu9 00:03:54.564 + exit 1 00:03:54.564 21:18:20 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:03:54.564 INFO: configuration change detected. 
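That second comparison fails by construction: MallocBdevForConfigChangeCheck exists only so the test can delete it over RPC and guarantee the live configuration diverges from the JSON the target was launched with. The trigger, taken directly from the trace:

    # Remove the sentinel bdev; save_config output now differs from the file,
    # diff exits 1, and the test reports 'configuration change detected'.
    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck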
00:03:54.564 21:18:20 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:03:54.564 21:18:20 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:03:54.564 21:18:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:54.564 21:18:20 -- common/autotest_common.sh@10 -- # set +x 00:03:54.564 21:18:20 -- json_config/json_config.sh@307 -- # local ret=0 00:03:54.564 21:18:20 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:03:54.564 21:18:20 -- json_config/json_config.sh@317 -- # [[ -n 2484159 ]] 00:03:54.564 21:18:20 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:03:54.564 21:18:20 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:03:54.564 21:18:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:54.564 21:18:20 -- common/autotest_common.sh@10 -- # set +x 00:03:54.564 21:18:20 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:03:54.564 21:18:20 -- json_config/json_config.sh@193 -- # uname -s 00:03:54.564 21:18:20 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:03:54.564 21:18:20 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:03:54.564 21:18:20 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:03:54.564 21:18:20 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:03:54.564 21:18:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:54.564 21:18:20 -- common/autotest_common.sh@10 -- # set +x 00:03:54.564 21:18:20 -- json_config/json_config.sh@323 -- # killprocess 2484159 00:03:54.564 21:18:20 -- common/autotest_common.sh@936 -- # '[' -z 2484159 ']' 00:03:54.564 21:18:20 -- common/autotest_common.sh@940 -- # kill -0 2484159 00:03:54.564 21:18:20 -- common/autotest_common.sh@941 -- # uname 00:03:54.564 21:18:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:54.564 21:18:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2484159 00:03:54.564 21:18:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:54.564 21:18:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:54.564 21:18:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2484159' 00:03:54.564 killing process with pid 2484159 00:03:54.564 21:18:20 -- common/autotest_common.sh@955 -- # kill 2484159 00:03:54.564 21:18:20 -- common/autotest_common.sh@960 -- # wait 2484159 00:03:56.464 21:18:21 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.464 21:18:21 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:03:56.464 21:18:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:56.464 21:18:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.464 21:18:21 -- json_config/json_config.sh@328 -- # return 0 00:03:56.464 21:18:21 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:03:56.464 INFO: Success 00:03:56.464 00:03:56.464 real 0m16.755s 00:03:56.464 user 0m18.509s 00:03:56.464 sys 0m2.247s 00:03:56.464 21:18:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:56.464 21:18:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.464 ************************************ 00:03:56.464 END TEST json_config 00:03:56.464 ************************************ 00:03:56.464 21:18:21 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:56.464 21:18:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.464 21:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.464 21:18:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.464 ************************************ 00:03:56.464 START TEST json_config_extra_key 00:03:56.464 ************************************ 00:03:56.464 21:18:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:56.464 21:18:21 -- nvmf/common.sh@7 -- # uname -s 00:03:56.464 21:18:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:56.464 21:18:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:56.464 21:18:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:56.464 21:18:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:56.464 21:18:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:56.464 21:18:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:56.464 21:18:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:56.464 21:18:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:56.464 21:18:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:56.464 21:18:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:56.464 21:18:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:56.464 21:18:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:56.464 21:18:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:56.464 21:18:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:56.464 21:18:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:56.464 21:18:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:56.464 21:18:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:56.464 21:18:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:56.464 21:18:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:56.464 21:18:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:56.464 21:18:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.464 21:18:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.464 21:18:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.464 21:18:21 -- paths/export.sh@5 -- # export PATH 00:03:56.464 21:18:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.464 21:18:21 -- nvmf/common.sh@47 -- # : 0 00:03:56.464 21:18:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:56.464 21:18:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:56.464 21:18:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:56.464 21:18:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:56.464 21:18:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:56.464 21:18:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:56.464 21:18:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:56.464 21:18:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:56.464 INFO: launching applications... 
00:03:56.464 21:18:21 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:56.464 21:18:21 -- json_config/common.sh@9 -- # local app=target 00:03:56.464 21:18:21 -- json_config/common.sh@10 -- # shift 00:03:56.464 21:18:21 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:56.464 21:18:21 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:56.464 21:18:21 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:56.464 21:18:21 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.464 21:18:21 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.464 21:18:21 -- json_config/common.sh@22 -- # app_pid["$app"]=2485091 00:03:56.464 21:18:21 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:56.464 Waiting for target to run... 00:03:56.464 21:18:21 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:56.464 21:18:21 -- json_config/common.sh@25 -- # waitforlisten 2485091 /var/tmp/spdk_tgt.sock 00:03:56.464 21:18:21 -- common/autotest_common.sh@817 -- # '[' -z 2485091 ']' 00:03:56.464 21:18:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:56.464 21:18:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:56.464 21:18:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:56.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:56.464 21:18:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:56.464 21:18:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.464 [2024-04-24 21:18:21.987446] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:03:56.464 [2024-04-24 21:18:21.987545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485091 ] 00:03:56.464 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.031 [2024-04-24 21:18:22.487175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.031 [2024-04-24 21:18:22.592218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.289 21:18:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:57.289 21:18:22 -- common/autotest_common.sh@850 -- # return 0 00:03:57.289 21:18:22 -- json_config/common.sh@26 -- # echo '' 00:03:57.289 00:03:57.289 21:18:22 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:57.289 INFO: shutting down applications... 
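Shutdown in these tests is graceful with a deadline: SIGINT first, then poll the pid for at most 30 half-second intervals, which is exactly the (( i < 30 )) / kill -0 / sleep 0.5 loop visible in the trace. The same pattern in isolation (pid value taken from this run):

    app_pid=2485091
    kill -SIGINT "$app_pid"

    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'    # reactor drained and exited
            break
        fi
        sleep 0.5
    done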
00:03:57.289 21:18:22 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:57.289 21:18:22 -- json_config/common.sh@31 -- # local app=target 00:03:57.289 21:18:22 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:57.289 21:18:22 -- json_config/common.sh@35 -- # [[ -n 2485091 ]] 00:03:57.289 21:18:22 -- json_config/common.sh@38 -- # kill -SIGINT 2485091 00:03:57.289 21:18:22 -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:57.289 21:18:22 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:57.289 21:18:22 -- json_config/common.sh@41 -- # kill -0 2485091 00:03:57.289 21:18:22 -- json_config/common.sh@45 -- # sleep 0.5 00:03:57.856 21:18:23 -- json_config/common.sh@40 -- # (( i++ )) 00:03:57.856 21:18:23 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:57.856 21:18:23 -- json_config/common.sh@41 -- # kill -0 2485091 00:03:57.856 21:18:23 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:57.856 21:18:23 -- json_config/common.sh@43 -- # break 00:03:57.856 21:18:23 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:57.856 21:18:23 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:57.856 SPDK target shutdown done 00:03:57.856 21:18:23 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:57.856 Success 00:03:57.856 00:03:57.856 real 0m1.546s 00:03:57.856 user 0m1.406s 00:03:57.856 sys 0m0.579s 00:03:57.856 21:18:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:57.856 21:18:23 -- common/autotest_common.sh@10 -- # set +x 00:03:57.856 ************************************ 00:03:57.856 END TEST json_config_extra_key 00:03:57.856 ************************************ 00:03:57.856 21:18:23 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:57.856 21:18:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.856 21:18:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.856 21:18:23 -- common/autotest_common.sh@10 -- # set +x 00:03:58.115 ************************************ 00:03:58.115 START TEST alias_rpc 00:03:58.115 ************************************ 00:03:58.115 21:18:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:58.115 * Looking for test storage... 00:03:58.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:58.115 21:18:23 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:58.115 21:18:23 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2485405 00:03:58.115 21:18:23 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:58.115 21:18:23 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2485405 00:03:58.115 21:18:23 -- common/autotest_common.sh@817 -- # '[' -z 2485405 ']' 00:03:58.115 21:18:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.115 21:18:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:58.115 21:18:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
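Every test above ends the same way, and alias_rpc below will too: killprocess verifies the pid is alive, checks via ps that it names an SPDK reactor (reactor_0) rather than the sudo wrapper, then kills and reaps it. A reconstruction assembled from the trace fragments (helper name and exact control flow are approximated):

    killprocess_sketch() {
        local pid=$1 name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1           # still running?
        [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                   # never kill the wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }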
00:03:58.115 21:18:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:58.115 21:18:23 -- common/autotest_common.sh@10 -- # set +x 00:03:58.115 [2024-04-24 21:18:23.652942] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:03:58.115 [2024-04-24 21:18:23.653036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485405 ] 00:03:58.116 EAL: No free 2048 kB hugepages reported on node 1 00:03:58.116 [2024-04-24 21:18:23.709331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.374 [2024-04-24 21:18:23.814213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.633 21:18:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:58.633 21:18:24 -- common/autotest_common.sh@850 -- # return 0 00:03:58.633 21:18:24 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:58.892 21:18:24 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2485405 00:03:58.892 21:18:24 -- common/autotest_common.sh@936 -- # '[' -z 2485405 ']' 00:03:58.892 21:18:24 -- common/autotest_common.sh@940 -- # kill -0 2485405 00:03:58.892 21:18:24 -- common/autotest_common.sh@941 -- # uname 00:03:58.892 21:18:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:58.892 21:18:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2485405 00:03:58.892 21:18:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:58.892 21:18:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:58.892 21:18:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2485405' 00:03:58.892 killing process with pid 2485405 00:03:58.892 21:18:24 -- common/autotest_common.sh@955 -- # kill 2485405 00:03:58.892 21:18:24 -- common/autotest_common.sh@960 -- # wait 2485405 00:03:59.459 00:03:59.459 real 0m1.320s 00:03:59.459 user 0m1.437s 00:03:59.459 sys 0m0.415s 00:03:59.459 21:18:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.459 21:18:24 -- common/autotest_common.sh@10 -- # set +x 00:03:59.459 ************************************ 00:03:59.459 END TEST alias_rpc 00:03:59.459 ************************************ 00:03:59.459 21:18:24 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:03:59.459 21:18:24 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:59.459 21:18:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.459 21:18:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.459 21:18:24 -- common/autotest_common.sh@10 -- # set +x 00:03:59.459 ************************************ 00:03:59.459 START TEST spdkcli_tcp 00:03:59.459 ************************************ 00:03:59.459 21:18:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:59.459 * Looking for test storage... 
00:03:59.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:59.459 21:18:25 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:59.459 21:18:25 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:59.459 21:18:25 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:59.459 21:18:25 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:59.459 21:18:25 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:59.459 21:18:25 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:59.459 21:18:25 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:59.459 21:18:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:59.459 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:03:59.459 21:18:25 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2485603 00:03:59.459 21:18:25 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:59.459 21:18:25 -- spdkcli/tcp.sh@27 -- # waitforlisten 2485603 00:03:59.459 21:18:25 -- common/autotest_common.sh@817 -- # '[' -z 2485603 ']' 00:03:59.459 21:18:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.459 21:18:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:59.459 21:18:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.459 21:18:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:59.459 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:03:59.459 [2024-04-24 21:18:25.099789] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:03:59.459 [2024-04-24 21:18:25.099876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485603 ] 00:03:59.459 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.718 [2024-04-24 21:18:25.156922] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:59.718 [2024-04-24 21:18:25.261611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:59.718 [2024-04-24 21:18:25.261616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.976 21:18:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:59.976 21:18:25 -- common/autotest_common.sh@850 -- # return 0 00:03:59.976 21:18:25 -- spdkcli/tcp.sh@31 -- # socat_pid=2485661 00:03:59.976 21:18:25 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:59.976 21:18:25 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:00.234 [ 00:04:00.234 "bdev_malloc_delete", 00:04:00.234 "bdev_malloc_create", 00:04:00.234 "bdev_null_resize", 00:04:00.234 "bdev_null_delete", 00:04:00.234 "bdev_null_create", 00:04:00.234 "bdev_nvme_cuse_unregister", 00:04:00.234 "bdev_nvme_cuse_register", 00:04:00.234 "bdev_opal_new_user", 00:04:00.234 "bdev_opal_set_lock_state", 00:04:00.234 "bdev_opal_delete", 00:04:00.234 "bdev_opal_get_info", 00:04:00.234 "bdev_opal_create", 00:04:00.234 "bdev_nvme_opal_revert", 00:04:00.234 "bdev_nvme_opal_init", 00:04:00.234 "bdev_nvme_send_cmd", 00:04:00.234 "bdev_nvme_get_path_iostat", 00:04:00.234 "bdev_nvme_get_mdns_discovery_info", 00:04:00.234 "bdev_nvme_stop_mdns_discovery", 00:04:00.234 "bdev_nvme_start_mdns_discovery", 00:04:00.234 "bdev_nvme_set_multipath_policy", 00:04:00.234 "bdev_nvme_set_preferred_path", 00:04:00.234 "bdev_nvme_get_io_paths", 00:04:00.234 "bdev_nvme_remove_error_injection", 00:04:00.234 "bdev_nvme_add_error_injection", 00:04:00.234 "bdev_nvme_get_discovery_info", 00:04:00.234 "bdev_nvme_stop_discovery", 00:04:00.234 "bdev_nvme_start_discovery", 00:04:00.234 "bdev_nvme_get_controller_health_info", 00:04:00.234 "bdev_nvme_disable_controller", 00:04:00.234 "bdev_nvme_enable_controller", 00:04:00.234 "bdev_nvme_reset_controller", 00:04:00.234 "bdev_nvme_get_transport_statistics", 00:04:00.234 "bdev_nvme_apply_firmware", 00:04:00.234 "bdev_nvme_detach_controller", 00:04:00.234 "bdev_nvme_get_controllers", 00:04:00.234 "bdev_nvme_attach_controller", 00:04:00.234 "bdev_nvme_set_hotplug", 00:04:00.234 "bdev_nvme_set_options", 00:04:00.234 "bdev_passthru_delete", 00:04:00.234 "bdev_passthru_create", 00:04:00.234 "bdev_lvol_grow_lvstore", 00:04:00.234 "bdev_lvol_get_lvols", 00:04:00.234 "bdev_lvol_get_lvstores", 00:04:00.234 "bdev_lvol_delete", 00:04:00.234 "bdev_lvol_set_read_only", 00:04:00.234 "bdev_lvol_resize", 00:04:00.234 "bdev_lvol_decouple_parent", 00:04:00.234 "bdev_lvol_inflate", 00:04:00.234 "bdev_lvol_rename", 00:04:00.234 "bdev_lvol_clone_bdev", 00:04:00.234 "bdev_lvol_clone", 00:04:00.234 "bdev_lvol_snapshot", 00:04:00.234 "bdev_lvol_create", 00:04:00.234 "bdev_lvol_delete_lvstore", 00:04:00.234 "bdev_lvol_rename_lvstore", 00:04:00.234 "bdev_lvol_create_lvstore", 00:04:00.234 "bdev_raid_set_options", 00:04:00.234 "bdev_raid_remove_base_bdev", 00:04:00.234 "bdev_raid_add_base_bdev", 00:04:00.234 "bdev_raid_delete", 00:04:00.234 "bdev_raid_create", 
00:04:00.234 "bdev_raid_get_bdevs", 00:04:00.234 "bdev_error_inject_error", 00:04:00.234 "bdev_error_delete", 00:04:00.234 "bdev_error_create", 00:04:00.234 "bdev_split_delete", 00:04:00.234 "bdev_split_create", 00:04:00.234 "bdev_delay_delete", 00:04:00.234 "bdev_delay_create", 00:04:00.234 "bdev_delay_update_latency", 00:04:00.234 "bdev_zone_block_delete", 00:04:00.234 "bdev_zone_block_create", 00:04:00.234 "blobfs_create", 00:04:00.234 "blobfs_detect", 00:04:00.234 "blobfs_set_cache_size", 00:04:00.234 "bdev_aio_delete", 00:04:00.234 "bdev_aio_rescan", 00:04:00.234 "bdev_aio_create", 00:04:00.234 "bdev_ftl_set_property", 00:04:00.234 "bdev_ftl_get_properties", 00:04:00.234 "bdev_ftl_get_stats", 00:04:00.234 "bdev_ftl_unmap", 00:04:00.234 "bdev_ftl_unload", 00:04:00.234 "bdev_ftl_delete", 00:04:00.234 "bdev_ftl_load", 00:04:00.234 "bdev_ftl_create", 00:04:00.234 "bdev_virtio_attach_controller", 00:04:00.234 "bdev_virtio_scsi_get_devices", 00:04:00.234 "bdev_virtio_detach_controller", 00:04:00.234 "bdev_virtio_blk_set_hotplug", 00:04:00.234 "bdev_iscsi_delete", 00:04:00.234 "bdev_iscsi_create", 00:04:00.234 "bdev_iscsi_set_options", 00:04:00.235 "accel_error_inject_error", 00:04:00.235 "ioat_scan_accel_module", 00:04:00.235 "dsa_scan_accel_module", 00:04:00.235 "iaa_scan_accel_module", 00:04:00.235 "vfu_virtio_create_scsi_endpoint", 00:04:00.235 "vfu_virtio_scsi_remove_target", 00:04:00.235 "vfu_virtio_scsi_add_target", 00:04:00.235 "vfu_virtio_create_blk_endpoint", 00:04:00.235 "vfu_virtio_delete_endpoint", 00:04:00.235 "keyring_file_remove_key", 00:04:00.235 "keyring_file_add_key", 00:04:00.235 "iscsi_set_options", 00:04:00.235 "iscsi_get_auth_groups", 00:04:00.235 "iscsi_auth_group_remove_secret", 00:04:00.235 "iscsi_auth_group_add_secret", 00:04:00.235 "iscsi_delete_auth_group", 00:04:00.235 "iscsi_create_auth_group", 00:04:00.235 "iscsi_set_discovery_auth", 00:04:00.235 "iscsi_get_options", 00:04:00.235 "iscsi_target_node_request_logout", 00:04:00.235 "iscsi_target_node_set_redirect", 00:04:00.235 "iscsi_target_node_set_auth", 00:04:00.235 "iscsi_target_node_add_lun", 00:04:00.235 "iscsi_get_stats", 00:04:00.235 "iscsi_get_connections", 00:04:00.235 "iscsi_portal_group_set_auth", 00:04:00.235 "iscsi_start_portal_group", 00:04:00.235 "iscsi_delete_portal_group", 00:04:00.235 "iscsi_create_portal_group", 00:04:00.235 "iscsi_get_portal_groups", 00:04:00.235 "iscsi_delete_target_node", 00:04:00.235 "iscsi_target_node_remove_pg_ig_maps", 00:04:00.235 "iscsi_target_node_add_pg_ig_maps", 00:04:00.235 "iscsi_create_target_node", 00:04:00.235 "iscsi_get_target_nodes", 00:04:00.235 "iscsi_delete_initiator_group", 00:04:00.235 "iscsi_initiator_group_remove_initiators", 00:04:00.235 "iscsi_initiator_group_add_initiators", 00:04:00.235 "iscsi_create_initiator_group", 00:04:00.235 "iscsi_get_initiator_groups", 00:04:00.235 "nvmf_set_crdt", 00:04:00.235 "nvmf_set_config", 00:04:00.235 "nvmf_set_max_subsystems", 00:04:00.235 "nvmf_subsystem_get_listeners", 00:04:00.235 "nvmf_subsystem_get_qpairs", 00:04:00.235 "nvmf_subsystem_get_controllers", 00:04:00.235 "nvmf_get_stats", 00:04:00.235 "nvmf_get_transports", 00:04:00.235 "nvmf_create_transport", 00:04:00.235 "nvmf_get_targets", 00:04:00.235 "nvmf_delete_target", 00:04:00.235 "nvmf_create_target", 00:04:00.235 "nvmf_subsystem_allow_any_host", 00:04:00.235 "nvmf_subsystem_remove_host", 00:04:00.235 "nvmf_subsystem_add_host", 00:04:00.235 "nvmf_ns_remove_host", 00:04:00.235 "nvmf_ns_add_host", 00:04:00.235 "nvmf_subsystem_remove_ns", 00:04:00.235 
"nvmf_subsystem_add_ns", 00:04:00.235 "nvmf_subsystem_listener_set_ana_state", 00:04:00.235 "nvmf_discovery_get_referrals", 00:04:00.235 "nvmf_discovery_remove_referral", 00:04:00.235 "nvmf_discovery_add_referral", 00:04:00.235 "nvmf_subsystem_remove_listener", 00:04:00.235 "nvmf_subsystem_add_listener", 00:04:00.235 "nvmf_delete_subsystem", 00:04:00.235 "nvmf_create_subsystem", 00:04:00.235 "nvmf_get_subsystems", 00:04:00.235 "env_dpdk_get_mem_stats", 00:04:00.235 "nbd_get_disks", 00:04:00.235 "nbd_stop_disk", 00:04:00.235 "nbd_start_disk", 00:04:00.235 "ublk_recover_disk", 00:04:00.235 "ublk_get_disks", 00:04:00.235 "ublk_stop_disk", 00:04:00.235 "ublk_start_disk", 00:04:00.235 "ublk_destroy_target", 00:04:00.235 "ublk_create_target", 00:04:00.235 "virtio_blk_create_transport", 00:04:00.235 "virtio_blk_get_transports", 00:04:00.235 "vhost_controller_set_coalescing", 00:04:00.235 "vhost_get_controllers", 00:04:00.235 "vhost_delete_controller", 00:04:00.235 "vhost_create_blk_controller", 00:04:00.235 "vhost_scsi_controller_remove_target", 00:04:00.235 "vhost_scsi_controller_add_target", 00:04:00.235 "vhost_start_scsi_controller", 00:04:00.235 "vhost_create_scsi_controller", 00:04:00.235 "thread_set_cpumask", 00:04:00.235 "framework_get_scheduler", 00:04:00.235 "framework_set_scheduler", 00:04:00.235 "framework_get_reactors", 00:04:00.235 "thread_get_io_channels", 00:04:00.235 "thread_get_pollers", 00:04:00.235 "thread_get_stats", 00:04:00.235 "framework_monitor_context_switch", 00:04:00.235 "spdk_kill_instance", 00:04:00.235 "log_enable_timestamps", 00:04:00.235 "log_get_flags", 00:04:00.235 "log_clear_flag", 00:04:00.235 "log_set_flag", 00:04:00.235 "log_get_level", 00:04:00.235 "log_set_level", 00:04:00.235 "log_get_print_level", 00:04:00.235 "log_set_print_level", 00:04:00.235 "framework_enable_cpumask_locks", 00:04:00.235 "framework_disable_cpumask_locks", 00:04:00.235 "framework_wait_init", 00:04:00.235 "framework_start_init", 00:04:00.235 "scsi_get_devices", 00:04:00.235 "bdev_get_histogram", 00:04:00.235 "bdev_enable_histogram", 00:04:00.235 "bdev_set_qos_limit", 00:04:00.235 "bdev_set_qd_sampling_period", 00:04:00.235 "bdev_get_bdevs", 00:04:00.235 "bdev_reset_iostat", 00:04:00.235 "bdev_get_iostat", 00:04:00.235 "bdev_examine", 00:04:00.235 "bdev_wait_for_examine", 00:04:00.235 "bdev_set_options", 00:04:00.235 "notify_get_notifications", 00:04:00.235 "notify_get_types", 00:04:00.235 "accel_get_stats", 00:04:00.235 "accel_set_options", 00:04:00.235 "accel_set_driver", 00:04:00.235 "accel_crypto_key_destroy", 00:04:00.235 "accel_crypto_keys_get", 00:04:00.235 "accel_crypto_key_create", 00:04:00.235 "accel_assign_opc", 00:04:00.235 "accel_get_module_info", 00:04:00.235 "accel_get_opc_assignments", 00:04:00.235 "vmd_rescan", 00:04:00.235 "vmd_remove_device", 00:04:00.235 "vmd_enable", 00:04:00.235 "sock_set_default_impl", 00:04:00.235 "sock_impl_set_options", 00:04:00.235 "sock_impl_get_options", 00:04:00.235 "iobuf_get_stats", 00:04:00.235 "iobuf_set_options", 00:04:00.235 "keyring_get_keys", 00:04:00.235 "framework_get_pci_devices", 00:04:00.235 "framework_get_config", 00:04:00.235 "framework_get_subsystems", 00:04:00.235 "vfu_tgt_set_base_path", 00:04:00.235 "trace_get_info", 00:04:00.235 "trace_get_tpoint_group_mask", 00:04:00.235 "trace_disable_tpoint_group", 00:04:00.235 "trace_enable_tpoint_group", 00:04:00.235 "trace_clear_tpoint_mask", 00:04:00.235 "trace_set_tpoint_mask", 00:04:00.235 "spdk_get_version", 00:04:00.235 "rpc_get_methods" 00:04:00.235 ] 00:04:00.235 21:18:25 -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:00.235 21:18:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:00.235 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:04:00.235 21:18:25 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:00.235 21:18:25 -- spdkcli/tcp.sh@38 -- # killprocess 2485603 00:04:00.235 21:18:25 -- common/autotest_common.sh@936 -- # '[' -z 2485603 ']' 00:04:00.235 21:18:25 -- common/autotest_common.sh@940 -- # kill -0 2485603 00:04:00.235 21:18:25 -- common/autotest_common.sh@941 -- # uname 00:04:00.235 21:18:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:00.235 21:18:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2485603 00:04:00.235 21:18:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:00.235 21:18:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:00.235 21:18:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2485603' 00:04:00.235 killing process with pid 2485603 00:04:00.235 21:18:25 -- common/autotest_common.sh@955 -- # kill 2485603 00:04:00.235 21:18:25 -- common/autotest_common.sh@960 -- # wait 2485603 00:04:00.838 00:04:00.838 real 0m1.299s 00:04:00.838 user 0m2.284s 00:04:00.838 sys 0m0.434s 00:04:00.838 21:18:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:00.838 21:18:26 -- common/autotest_common.sh@10 -- # set +x 00:04:00.838 ************************************ 00:04:00.838 END TEST spdkcli_tcp 00:04:00.838 ************************************ 00:04:00.838 21:18:26 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:00.838 21:18:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.838 21:18:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.838 21:18:26 -- common/autotest_common.sh@10 -- # set +x 00:04:00.838 ************************************ 00:04:00.838 START TEST dpdk_mem_utility 00:04:00.838 ************************************ 00:04:00.838 21:18:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:00.838 * Looking for test storage... 00:04:00.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:00.838 21:18:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:00.838 21:18:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2485815 00:04:00.838 21:18:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.838 21:18:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2485815 00:04:00.838 21:18:26 -- common/autotest_common.sh@817 -- # '[' -z 2485815 ']' 00:04:00.838 21:18:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.838 21:18:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:00.838 21:18:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
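In the spdkcli_tcp run above, socat bridges TCP port 9998 to /var/tmp/spdk.sock so that rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 can drive the target over TCP. Stripped of rpc.py's retry and timeout handling, the rpc_get_methods call is one JSON-RPC 2.0 request over that socket; a rough sketch (single recv, so the framing is simplified relative to the real client):

    import json
    import socket

    # Assumes the bridge from the test is running:
    #   socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
    with socket.create_connection(('127.0.0.1', 9998), timeout=2) as sock:
        request = {'jsonrpc': '2.0', 'method': 'rpc_get_methods', 'id': 1}
        sock.sendall(json.dumps(request).encode())
        reply = sock.recv(1 << 20)        # method list easily fits in 1 MiB
        methods = json.loads(reply)['result']
        print(len(methods), 'methods, e.g.', methods[-1])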
00:04:00.838 21:18:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:00.838 21:18:26 -- common/autotest_common.sh@10 -- # set +x 00:04:01.096 [2024-04-24 21:18:26.520734] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:01.096 [2024-04-24 21:18:26.520815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485815 ] 00:04:01.096 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.096 [2024-04-24 21:18:26.584324] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.096 [2024-04-24 21:18:26.708849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.354 21:18:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:01.354 21:18:26 -- common/autotest_common.sh@850 -- # return 0 00:04:01.354 21:18:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:01.354 21:18:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:01.354 21:18:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:01.354 21:18:26 -- common/autotest_common.sh@10 -- # set +x 00:04:01.354 { 00:04:01.354 "filename": "/tmp/spdk_mem_dump.txt" 00:04:01.354 } 00:04:01.354 21:18:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:01.354 21:18:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:01.612 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:01.612 1 heaps totaling size 814.000000 MiB 00:04:01.612 size: 814.000000 MiB heap id: 0 00:04:01.612 end heaps---------- 00:04:01.612 8 mempools totaling size 598.116089 MiB 00:04:01.612 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:01.612 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:01.612 size: 84.521057 MiB name: bdev_io_2485815 00:04:01.612 size: 51.011292 MiB name: evtpool_2485815 00:04:01.612 size: 50.003479 MiB name: msgpool_2485815 00:04:01.612 size: 21.763794 MiB name: PDU_Pool 00:04:01.612 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:01.612 size: 0.026123 MiB name: Session_Pool 00:04:01.612 end mempools------- 00:04:01.613 6 memzones totaling size 4.142822 MiB 00:04:01.613 size: 1.000366 MiB name: RG_ring_0_2485815 00:04:01.613 size: 1.000366 MiB name: RG_ring_1_2485815 00:04:01.613 size: 1.000366 MiB name: RG_ring_4_2485815 00:04:01.613 size: 1.000366 MiB name: RG_ring_5_2485815 00:04:01.613 size: 0.125366 MiB name: RG_ring_2_2485815 00:04:01.613 size: 0.015991 MiB name: RG_ring_3_2485815 00:04:01.613 end memzones------- 00:04:01.613 21:18:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:01.613 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:01.613 list of free elements. 
size: 12.519348 MiB 00:04:01.613 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:01.613 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:01.613 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:01.613 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:01.613 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:01.613 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:01.613 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:01.613 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:01.613 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:01.613 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:01.613 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:01.613 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:01.613 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:01.613 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:01.613 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:01.613 list of standard malloc elements. size: 199.218079 MiB 00:04:01.613 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:01.613 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:01.613 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:01.613 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:01.613 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:01.613 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:01.613 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:01.613 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:01.613 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:01.613 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:01.613 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:01.613 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:01.613 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:01.613 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:01.613 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:01.613 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:01.613 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:01.613 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:01.613 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:01.613 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:01.613 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:01.613 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:01.613 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:01.613 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:01.613 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:01.613 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:01.613 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:01.613 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:01.613 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:01.613 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:01.613 list of memzone associated elements. size: 602.262573 MiB 00:04:01.613 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:01.613 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:01.613 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:01.613 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:01.613 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:01.613 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2485815_0 00:04:01.613 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:01.613 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2485815_0 00:04:01.613 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:01.613 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2485815_0 00:04:01.613 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:01.613 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:01.613 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:01.613 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:01.613 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:01.613 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2485815 00:04:01.613 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:01.613 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2485815 00:04:01.613 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:01.613 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2485815 00:04:01.613 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:01.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:01.613 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:01.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:01.613 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:01.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:01.613 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:01.613 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:01.613 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:01.613 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2485815 00:04:01.613 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:01.613 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2485815 00:04:01.613 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:01.613 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2485815 00:04:01.613 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:01.613 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2485815 00:04:01.613 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:01.613 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2485815 00:04:01.613 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:01.613 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:01.613 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:01.613 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:01.613 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:01.613 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:01.613 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:01.613 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2485815 00:04:01.613 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:01.613 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:01.613 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:01.613 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:01.613 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:01.613 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2485815 00:04:01.613 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:01.613 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:01.613 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:01.613 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2485815 00:04:01.613 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:01.613 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2485815 00:04:01.613 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:01.613 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:01.613 21:18:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:01.613 21:18:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2485815 00:04:01.613 21:18:27 -- common/autotest_common.sh@936 -- # '[' -z 2485815 ']' 00:04:01.613 21:18:27 -- common/autotest_common.sh@940 -- # kill -0 2485815 00:04:01.613 21:18:27 -- common/autotest_common.sh@941 -- # uname 00:04:01.613 21:18:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:01.614 21:18:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2485815 00:04:01.614 21:18:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:01.614 21:18:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:01.614 21:18:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2485815' 00:04:01.614 killing process with pid 2485815 00:04:01.614 21:18:27 -- common/autotest_common.sh@955 -- # kill 2485815 00:04:01.614 21:18:27 -- common/autotest_common.sh@960 -- # wait 2485815 00:04:02.179 00:04:02.179 real 0m1.176s 00:04:02.179 user 0m1.145s 00:04:02.179 sys 0m0.425s 00:04:02.179 21:18:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:02.179 21:18:27 -- common/autotest_common.sh@10 -- # set +x 00:04:02.179 ************************************ 00:04:02.179 END TEST dpdk_mem_utility 00:04:02.179 ************************************ 00:04:02.179 21:18:27 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:02.179 21:18:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.179 21:18:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.179 21:18:27 -- common/autotest_common.sh@10 -- # set +x 
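The dpdk_mem_utility test above boils down to two steps: ask the target to dump its DPDK allocator state with env_dpdk_get_mem_stats, then post-process /tmp/spdk_mem_dump.txt with scripts/dpdk_mem_info.py (totals first, per-element detail with -m 0). A condensed sketch of that flow, assuming the default socket path (the rpc helper below is illustrative, and the filter is a crude stand-in for the real parser):

    import json
    import socket

    def rpc(sock_path, method, params=None):
        """Minimal JSON-RPC call over the SPDK UNIX socket (no retries,
        single recv; fine for small replies in this sketch)."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            req = {'jsonrpc': '2.0', 'method': method, 'id': 1}
            if params:
                req['params'] = params
            s.sendall(json.dumps(req).encode())
            return json.loads(s.recv(1 << 20))

    rpc('/var/tmp/spdk.sock', 'env_dpdk_get_mem_stats')
    # The reply names the dump file ({"filename": "/tmp/spdk_mem_dump.txt"}).
    with open('/tmp/spdk_mem_dump.txt') as f:
        for line in f:
            if 'heap' in line.lower() or 'MiB' in line:
                print(line.rstrip())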
00:04:02.179 ************************************ 00:04:02.179 START TEST event 00:04:02.179 ************************************ 00:04:02.179 21:18:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:02.179 * Looking for test storage... 00:04:02.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:02.179 21:18:27 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:02.179 21:18:27 -- bdev/nbd_common.sh@6 -- # set -e 00:04:02.179 21:18:27 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:02.179 21:18:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:02.179 21:18:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.179 21:18:27 -- common/autotest_common.sh@10 -- # set +x 00:04:02.179 ************************************ 00:04:02.179 START TEST event_perf 00:04:02.179 ************************************ 00:04:02.179 21:18:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:02.436 Running I/O for 1 seconds...[2024-04-24 21:18:27.858298] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:02.436 [2024-04-24 21:18:27.858389] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486140 ] 00:04:02.436 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.436 [2024-04-24 21:18:27.931091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:02.436 [2024-04-24 21:18:28.058352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.436 [2024-04-24 21:18:28.058412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:02.436 [2024-04-24 21:18:28.058480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:02.436 [2024-04-24 21:18:28.058483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.807 Running I/O for 1 seconds... 00:04:03.807 lcore 0: 240978 00:04:03.807 lcore 1: 240978 00:04:03.807 lcore 2: 240978 00:04:03.807 lcore 3: 240977 00:04:03.807 done. 
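The event_perf result above prints one counter per lcore after "Running I/O for 1 seconds"; the aggregate rate can be pulled straight from the log with a small hypothetical helper:

    import re

    log = """lcore 0: 240978
    lcore 1: 240978
    lcore 2: 240978
    lcore 3: 240977"""

    counts = [int(m.group(1)) for m in re.finditer(r'lcore \d+: (\d+)', log)]
    print('total events/sec:', sum(counts))   # 963,911 across 4 reactors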
00:04:03.807 00:04:03.807 real 0m1.324s 00:04:03.807 user 0m4.221s 00:04:03.807 sys 0m0.098s 00:04:03.807 21:18:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:03.807 21:18:29 -- common/autotest_common.sh@10 -- # set +x 00:04:03.807 ************************************ 00:04:03.807 END TEST event_perf 00:04:03.807 ************************************ 00:04:03.807 21:18:29 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:03.807 21:18:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:03.807 21:18:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.807 21:18:29 -- common/autotest_common.sh@10 -- # set +x 00:04:03.807 ************************************ 00:04:03.807 START TEST event_reactor 00:04:03.807 ************************************ 00:04:03.807 21:18:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:03.807 [2024-04-24 21:18:29.306916] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:03.807 [2024-04-24 21:18:29.306997] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486309 ] 00:04:03.807 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.807 [2024-04-24 21:18:29.369962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.064 [2024-04-24 21:18:29.487672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.999 test_start 00:04:04.999 oneshot 00:04:04.999 tick 100 00:04:04.999 tick 100 00:04:04.999 tick 250 00:04:04.999 tick 100 00:04:04.999 tick 100 00:04:04.999 tick 100 00:04:04.999 tick 250 00:04:04.999 tick 500 00:04:04.999 tick 100 00:04:04.999 tick 100 00:04:04.999 tick 250 00:04:04.999 tick 100 00:04:04.999 tick 100 00:04:04.999 test_end 00:04:04.999 00:04:04.999 real 0m1.318s 00:04:04.999 user 0m1.226s 00:04:04.999 sys 0m0.087s 00:04:04.999 21:18:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:04.999 21:18:30 -- common/autotest_common.sh@10 -- # set +x 00:04:04.999 ************************************ 00:04:04.999 END TEST event_reactor 00:04:04.999 ************************************ 00:04:04.999 21:18:30 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:04.999 21:18:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:04.999 21:18:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.999 21:18:30 -- common/autotest_common.sh@10 -- # set +x 00:04:05.257 ************************************ 00:04:05.257 START TEST event_reactor_perf 00:04:05.257 ************************************ 00:04:05.257 21:18:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:05.257 [2024-04-24 21:18:30.748073] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:04:05.257 [2024-04-24 21:18:30.748141] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486469 ] 00:04:05.257 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.257 [2024-04-24 21:18:30.813113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.257 [2024-04-24 21:18:30.929134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.630 test_start 00:04:06.630 test_end 00:04:06.630 Performance: 356700 events per second 00:04:06.630 00:04:06.630 real 0m1.318s 00:04:06.631 user 0m1.233s 00:04:06.631 sys 0m0.080s 00:04:06.631 21:18:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:06.631 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:06.631 ************************************ 00:04:06.631 END TEST event_reactor_perf 00:04:06.631 ************************************ 00:04:06.631 21:18:32 -- event/event.sh@49 -- # uname -s 00:04:06.631 21:18:32 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:06.631 21:18:32 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:06.631 21:18:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.631 21:18:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.631 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:06.631 ************************************ 00:04:06.631 START TEST event_scheduler 00:04:06.631 ************************************ 00:04:06.631 21:18:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:06.631 * Looking for test storage... 00:04:06.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:06.631 21:18:32 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:06.631 21:18:32 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2486716 00:04:06.631 21:18:32 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:06.631 21:18:32 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.631 21:18:32 -- scheduler/scheduler.sh@37 -- # waitforlisten 2486716 00:04:06.631 21:18:32 -- common/autotest_common.sh@817 -- # '[' -z 2486716 ']' 00:04:06.631 21:18:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.631 21:18:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:06.631 21:18:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.631 21:18:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:06.631 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:06.631 [2024-04-24 21:18:32.274284] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
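Every test in this log blocks in waitforlisten until the freshly started target accepts RPC connections on /var/tmp/spdk.sock. The essence of that helper is a bounded connect-poll loop; a sketch (max_retries mirrors the traced local max_retries=100, the rest is illustrative):

    import socket
    import time

    def wait_for_listen(sock_path='/var/tmp/spdk.sock', max_retries=100):
        """Poll until the SPDK RPC UNIX socket accepts connections."""
        print(f'Waiting for process to start up and listen on '
              f'UNIX domain socket {sock_path}...')
        for _ in range(max_retries):
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(sock_path)
                    return True
            except (FileNotFoundError, ConnectionRefusedError):
                time.sleep(0.1)
        raise TimeoutError(f'{sock_path} never came up')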
00:04:06.631 [2024-04-24 21:18:32.274362] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486716 ] 00:04:06.631 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.890 [2024-04-24 21:18:32.337151] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:06.890 [2024-04-24 21:18:32.445100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.890 [2024-04-24 21:18:32.445157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.890 [2024-04-24 21:18:32.445222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:06.890 [2024-04-24 21:18:32.445225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:06.890 21:18:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:06.890 21:18:32 -- common/autotest_common.sh@850 -- # return 0 00:04:06.890 21:18:32 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:06.890 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.890 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:06.890 POWER: Env isn't set yet! 00:04:06.890 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:06.890 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:06.890 POWER: Cannot get available frequencies of lcore 0 00:04:06.890 POWER: Attempting to initialise PSTAT power management... 00:04:06.890 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:06.890 POWER: Initialized successfully for lcore 0 power management 00:04:06.890 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:06.890 POWER: Initialized successfully for lcore 1 power management 00:04:06.890 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:06.890 POWER: Initialized successfully for lcore 2 power management 00:04:06.890 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:06.890 POWER: Initialized successfully for lcore 3 power management 00:04:06.890 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:06.890 21:18:32 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:06.890 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:06.890 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 [2024-04-24 21:18:32.623818] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
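The POWER lines above come from DPDK's power management probing cpufreq sysfs: reading the available frequencies fails on this host (hence the "Cannot get available frequencies" warning), after which each managed lcore's governor is switched to 'performance' and restored at shutdown. The sysfs side of that is plain file I/O; an illustrative sketch for one CPU (requires root, not part of the test scripts):

    GOV = '/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor'

    with open(GOV) as f:
        original = f.read().strip()       # e.g. 'schedutil' or 'userspace'

    with open(GOV, 'w') as f:             # what EAL does per managed lcore
        f.write('performance')

    # ... run the workload ...

    with open(GOV, 'w') as f:             # restore, as in the shutdown log
        f.write(original)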
00:04:07.149 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:07.149 21:18:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.149 21:18:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 ************************************ 00:04:07.149 START TEST scheduler_create_thread 00:04:07.149 ************************************ 00:04:07.149 21:18:32 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:07.149 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 2 00:04:07.149 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:07.149 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 3 00:04:07.149 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:07.149 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 4 00:04:07.149 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:07.149 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 5 00:04:07.149 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:07.149 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 6 00:04:07.149 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:07.149 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 7 00:04:07.149 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:07.149 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 8 00:04:07.149 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:07.149 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 9 00:04:07.149 
21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:07.149 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 10 00:04:07.149 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:07.149 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:07.149 21:18:32 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:07.149 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.149 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.407 21:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:07.407 21:18:32 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:07.407 21:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:07.407 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:08.338 21:18:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.338 21:18:33 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:08.339 21:18:33 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:08.339 21:18:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.339 21:18:33 -- common/autotest_common.sh@10 -- # set +x 00:04:09.271 21:18:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.271 00:04:09.271 real 0m2.136s 00:04:09.271 user 0m0.013s 00:04:09.271 sys 0m0.002s 00:04:09.271 21:18:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.271 21:18:34 -- common/autotest_common.sh@10 -- # set +x 00:04:09.271 ************************************ 00:04:09.271 END TEST scheduler_create_thread 00:04:09.271 ************************************ 00:04:09.271 21:18:34 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:09.271 21:18:34 -- scheduler/scheduler.sh@46 -- # killprocess 2486716 00:04:09.271 21:18:34 -- common/autotest_common.sh@936 -- # '[' -z 2486716 ']' 00:04:09.271 21:18:34 -- common/autotest_common.sh@940 -- # kill -0 2486716 00:04:09.271 21:18:34 -- common/autotest_common.sh@941 -- # uname 00:04:09.271 21:18:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:09.271 21:18:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2486716 00:04:09.271 21:18:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:09.271 21:18:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:09.271 21:18:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2486716' 00:04:09.271 killing process with pid 2486716 00:04:09.271 21:18:34 -- common/autotest_common.sh@955 -- # kill 2486716 00:04:09.271 21:18:34 -- common/autotest_common.sh@960 -- # wait 2486716 00:04:09.838 [2024-04-24 21:18:35.345569] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
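scheduler_create_thread drives the scheduler test app purely through plugin RPCs: a fully active and a fully idle pinned thread per core mask, one partially active thread, then thread 11 is flipped to 50% activity and thread 12 is deleted. The same sequence can be scripted around rpc.py; a sketch using the commands exactly as traced (paths shortened, and it assumes scheduler_plugin is importable the way scheduler.sh arranges):

    import subprocess

    RPC = ['./scripts/rpc.py', '--plugin', 'scheduler_plugin']

    def rpc_cmd(*args):
        subprocess.run([*RPC, *args], check=True)

    # one active and one idle pinned thread per core (masks 0x1..0x8)
    for mask in ('0x1', '0x2', '0x4', '0x8'):
        rpc_cmd('scheduler_thread_create', '-n', 'active_pinned',
                '-m', mask, '-a', '100')
        rpc_cmd('scheduler_thread_create', '-n', 'idle_pinned',
                '-m', mask, '-a', '0')

    rpc_cmd('scheduler_thread_create', '-n', 'one_third_active', '-a', '30')
    rpc_cmd('scheduler_thread_create', '-n', 'half_active', '-a', '0')
    rpc_cmd('scheduler_thread_set_active', '11', '50')  # id from create above
    rpc_cmd('scheduler_thread_create', '-n', 'deleted', '-a', '100')
    rpc_cmd('scheduler_thread_delete', '12')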
00:04:09.838 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:09.838 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:09.838 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:09.838 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:09.838 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:09.838 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:09.838 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:09.838 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:10.097 00:04:10.097 real 0m3.426s 00:04:10.097 user 0m4.832s 00:04:10.097 sys 0m0.391s 00:04:10.097 21:18:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.097 21:18:35 -- common/autotest_common.sh@10 -- # set +x 00:04:10.097 ************************************ 00:04:10.097 END TEST event_scheduler 00:04:10.097 ************************************ 00:04:10.097 21:18:35 -- event/event.sh@51 -- # modprobe -n nbd 00:04:10.097 21:18:35 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:10.097 21:18:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.097 21:18:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.097 21:18:35 -- common/autotest_common.sh@10 -- # set +x 00:04:10.097 ************************************ 00:04:10.097 START TEST app_repeat 00:04:10.097 ************************************ 00:04:10.097 21:18:35 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:10.097 21:18:35 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.097 21:18:35 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.097 21:18:35 -- event/event.sh@13 -- # local nbd_list 00:04:10.097 21:18:35 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:10.097 21:18:35 -- event/event.sh@14 -- # local bdev_list 00:04:10.097 21:18:35 -- event/event.sh@15 -- # local repeat_times=4 00:04:10.097 21:18:35 -- event/event.sh@17 -- # modprobe nbd 00:04:10.097 21:18:35 -- event/event.sh@19 -- # repeat_pid=2487246 00:04:10.097 21:18:35 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:10.097 21:18:35 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.097 21:18:35 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2487246' 00:04:10.097 Process app_repeat pid: 2487246 00:04:10.097 21:18:35 -- event/event.sh@23 -- # for i in {0..2} 00:04:10.097 21:18:35 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:10.097 spdk_app_start Round 0 00:04:10.097 21:18:35 -- event/event.sh@25 -- # waitforlisten 2487246 /var/tmp/spdk-nbd.sock 00:04:10.097 21:18:35 -- common/autotest_common.sh@817 -- # '[' -z 2487246 ']' 00:04:10.097 21:18:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:10.097 21:18:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:10.097 21:18:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:10.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:10.097 21:18:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:10.097 21:18:35 -- common/autotest_common.sh@10 -- # set +x 00:04:10.097 [2024-04-24 21:18:35.752327] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:10.097 [2024-04-24 21:18:35.752391] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487246 ] 00:04:10.356 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.356 [2024-04-24 21:18:35.812743] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:10.356 [2024-04-24 21:18:35.921136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.356 [2024-04-24 21:18:35.921140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.356 21:18:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:10.356 21:18:36 -- common/autotest_common.sh@850 -- # return 0 00:04:10.356 21:18:36 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:10.615 Malloc0 00:04:10.615 21:18:36 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:10.874 Malloc1 00:04:10.874 21:18:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@12 -- # local i 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:10.874 21:18:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:11.132 /dev/nbd0 00:04:11.389 21:18:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:11.389 21:18:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:11.389 21:18:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:11.389 21:18:36 -- common/autotest_common.sh@855 -- # local i 00:04:11.389 21:18:36 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:11.389 21:18:36 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:11.389 21:18:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:11.389 21:18:36 -- 
common/autotest_common.sh@859 -- # break 00:04:11.389 21:18:36 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:11.389 21:18:36 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:11.389 21:18:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:11.389 1+0 records in 00:04:11.389 1+0 records out 00:04:11.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172305 s, 23.8 MB/s 00:04:11.389 21:18:36 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.389 21:18:36 -- common/autotest_common.sh@872 -- # size=4096 00:04:11.389 21:18:36 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.389 21:18:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:11.389 21:18:36 -- common/autotest_common.sh@875 -- # return 0 00:04:11.389 21:18:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:11.389 21:18:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.389 21:18:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:11.647 /dev/nbd1 00:04:11.647 21:18:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:11.647 21:18:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:11.647 21:18:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:11.647 21:18:37 -- common/autotest_common.sh@855 -- # local i 00:04:11.647 21:18:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:11.647 21:18:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:11.647 21:18:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:11.647 21:18:37 -- common/autotest_common.sh@859 -- # break 00:04:11.647 21:18:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:11.647 21:18:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:11.647 21:18:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:11.647 1+0 records in 00:04:11.647 1+0 records out 00:04:11.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240652 s, 17.0 MB/s 00:04:11.647 21:18:37 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.647 21:18:37 -- common/autotest_common.sh@872 -- # size=4096 00:04:11.647 21:18:37 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.647 21:18:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:11.647 21:18:37 -- common/autotest_common.sh@875 -- # return 0 00:04:11.647 21:18:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:11.647 21:18:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.647 21:18:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:11.647 21:18:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.647 21:18:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:11.904 { 00:04:11.904 "nbd_device": "/dev/nbd0", 00:04:11.904 "bdev_name": "Malloc0" 00:04:11.904 }, 00:04:11.904 { 00:04:11.904 "nbd_device": "/dev/nbd1", 
00:04:11.904 "bdev_name": "Malloc1" 00:04:11.904 } 00:04:11.904 ]' 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:11.904 { 00:04:11.904 "nbd_device": "/dev/nbd0", 00:04:11.904 "bdev_name": "Malloc0" 00:04:11.904 }, 00:04:11.904 { 00:04:11.904 "nbd_device": "/dev/nbd1", 00:04:11.904 "bdev_name": "Malloc1" 00:04:11.904 } 00:04:11.904 ]' 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:11.904 /dev/nbd1' 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:11.904 /dev/nbd1' 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@65 -- # count=2 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@95 -- # count=2 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.904 21:18:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:11.905 256+0 records in 00:04:11.905 256+0 records out 00:04:11.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501932 s, 209 MB/s 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:11.905 256+0 records in 00:04:11.905 256+0 records out 00:04:11.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238969 s, 43.9 MB/s 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:11.905 256+0 records in 00:04:11.905 256+0 records out 00:04:11.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249114 s, 42.1 MB/s 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@51 -- # local i 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:11.905 21:18:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:12.162 21:18:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:12.162 21:18:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:12.162 21:18:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:12.162 21:18:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:12.162 21:18:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:12.162 21:18:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:12.162 21:18:37 -- bdev/nbd_common.sh@41 -- # break 00:04:12.162 21:18:37 -- bdev/nbd_common.sh@45 -- # return 0 00:04:12.162 21:18:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:12.162 21:18:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:12.420 21:18:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:12.420 21:18:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:12.420 21:18:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:12.420 21:18:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:12.420 21:18:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:12.420 21:18:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:12.420 21:18:38 -- bdev/nbd_common.sh@41 -- # break 00:04:12.420 21:18:38 -- bdev/nbd_common.sh@45 -- # return 0 00:04:12.420 21:18:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:12.420 21:18:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.420 21:18:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@65 -- # true 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@65 -- # count=0 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@104 -- # count=0 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:12.678 21:18:38 -- bdev/nbd_common.sh@109 -- # return 0 00:04:12.678 21:18:38 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:12.938 21:18:38 -- event/event.sh@35 -- # 
sleep 3 00:04:13.202 [2024-04-24 21:18:38.839443] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.460 [2024-04-24 21:18:38.953963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.460 [2024-04-24 21:18:38.953964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.460 [2024-04-24 21:18:39.011762] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:13.460 [2024-04-24 21:18:39.011824] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:15.988 21:18:41 -- event/event.sh@23 -- # for i in {0..2} 00:04:15.988 21:18:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:15.988 spdk_app_start Round 1 00:04:15.988 21:18:41 -- event/event.sh@25 -- # waitforlisten 2487246 /var/tmp/spdk-nbd.sock 00:04:15.988 21:18:41 -- common/autotest_common.sh@817 -- # '[' -z 2487246 ']' 00:04:15.988 21:18:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:15.988 21:18:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:15.988 21:18:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:15.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:15.988 21:18:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:15.988 21:18:41 -- common/autotest_common.sh@10 -- # set +x 00:04:16.246 21:18:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:16.246 21:18:41 -- common/autotest_common.sh@850 -- # return 0 00:04:16.246 21:18:41 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.504 Malloc0 00:04:16.504 21:18:42 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.762 Malloc1 00:04:16.762 21:18:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@12 -- # local i 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.762 21:18:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:17.020 /dev/nbd0 00:04:17.020 21:18:42 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:17.020 21:18:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:17.020 21:18:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:17.020 21:18:42 -- common/autotest_common.sh@855 -- # local i 00:04:17.020 21:18:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:17.020 21:18:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:17.020 21:18:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:17.020 21:18:42 -- common/autotest_common.sh@859 -- # break 00:04:17.020 21:18:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:17.020 21:18:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:17.020 21:18:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:17.020 1+0 records in 00:04:17.020 1+0 records out 00:04:17.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213504 s, 19.2 MB/s 00:04:17.020 21:18:42 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.020 21:18:42 -- common/autotest_common.sh@872 -- # size=4096 00:04:17.020 21:18:42 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.020 21:18:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:17.020 21:18:42 -- common/autotest_common.sh@875 -- # return 0 00:04:17.020 21:18:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:17.021 21:18:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:17.021 21:18:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:17.279 /dev/nbd1 00:04:17.279 21:18:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:17.279 21:18:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:17.279 21:18:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:17.279 21:18:42 -- common/autotest_common.sh@855 -- # local i 00:04:17.279 21:18:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:17.279 21:18:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:17.279 21:18:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:17.279 21:18:42 -- common/autotest_common.sh@859 -- # break 00:04:17.279 21:18:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:17.279 21:18:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:17.279 21:18:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:17.279 1+0 records in 00:04:17.279 1+0 records out 00:04:17.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213427 s, 19.2 MB/s 00:04:17.279 21:18:42 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.279 21:18:42 -- common/autotest_common.sh@872 -- # size=4096 00:04:17.279 21:18:42 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.279 21:18:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:17.279 21:18:42 -- common/autotest_common.sh@875 -- # return 0 00:04:17.279 21:18:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:17.279 21:18:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:17.279 21:18:42 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:17.279 21:18:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.279 21:18:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:17.541 { 00:04:17.541 "nbd_device": "/dev/nbd0", 00:04:17.541 "bdev_name": "Malloc0" 00:04:17.541 }, 00:04:17.541 { 00:04:17.541 "nbd_device": "/dev/nbd1", 00:04:17.541 "bdev_name": "Malloc1" 00:04:17.541 } 00:04:17.541 ]' 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:17.541 { 00:04:17.541 "nbd_device": "/dev/nbd0", 00:04:17.541 "bdev_name": "Malloc0" 00:04:17.541 }, 00:04:17.541 { 00:04:17.541 "nbd_device": "/dev/nbd1", 00:04:17.541 "bdev_name": "Malloc1" 00:04:17.541 } 00:04:17.541 ]' 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:17.541 /dev/nbd1' 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:17.541 /dev/nbd1' 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@65 -- # count=2 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@95 -- # count=2 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:17.541 256+0 records in 00:04:17.541 256+0 records out 00:04:17.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379238 s, 276 MB/s 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:17.541 256+0 records in 00:04:17.541 256+0 records out 00:04:17.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024233 s, 43.3 MB/s 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:17.541 256+0 records in 00:04:17.541 256+0 records out 00:04:17.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253397 s, 41.4 MB/s 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@51 -- # local i 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.541 21:18:43 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@41 -- # break 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@45 -- # return 0 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@41 -- # break 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@45 -- # return 0 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.107 21:18:43 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.365 21:18:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:18.365 21:18:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:18.365 21:18:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.365 21:18:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:18.365 21:18:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:18.365 21:18:44 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:04:18.365 21:18:44 -- bdev/nbd_common.sh@65 -- # true 00:04:18.365 21:18:44 -- bdev/nbd_common.sh@65 -- # count=0 00:04:18.365 21:18:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:18.365 21:18:44 -- bdev/nbd_common.sh@104 -- # count=0 00:04:18.365 21:18:44 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:18.365 21:18:44 -- bdev/nbd_common.sh@109 -- # return 0 00:04:18.365 21:18:44 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:18.931 21:18:44 -- event/event.sh@35 -- # sleep 3 00:04:18.931 [2024-04-24 21:18:44.582934] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:19.189 [2024-04-24 21:18:44.696130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.189 [2024-04-24 21:18:44.696137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.189 [2024-04-24 21:18:44.756924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:19.189 [2024-04-24 21:18:44.757011] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:21.717 21:18:47 -- event/event.sh@23 -- # for i in {0..2} 00:04:21.717 21:18:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:21.717 spdk_app_start Round 2 00:04:21.717 21:18:47 -- event/event.sh@25 -- # waitforlisten 2487246 /var/tmp/spdk-nbd.sock 00:04:21.717 21:18:47 -- common/autotest_common.sh@817 -- # '[' -z 2487246 ']' 00:04:21.717 21:18:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:21.717 21:18:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:21.717 21:18:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:21.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
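The verify phase traced in each round above is a plain write-then-compare over the NBD block devices. As a standalone sketch, assuming the two devices are already exported and shortening the temp-file path:

# Push 1 MiB of random data through each NBD device, then read it back
# and compare byte-for-byte, mirroring nbd_dd_data_verify in the log.
tmp_file=/tmp/nbdrandtest                 # the suite keeps this under spdk/test/event
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp_file" "$dev"       # non-zero exit on the first mismatch
done
rm "$tmp_file"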
00:04:21.717 21:18:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:21.717 21:18:47 -- common/autotest_common.sh@10 -- # set +x 00:04:21.975 21:18:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:21.975 21:18:47 -- common/autotest_common.sh@850 -- # return 0 00:04:21.975 21:18:47 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.233 Malloc0 00:04:22.233 21:18:47 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.491 Malloc1 00:04:22.491 21:18:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@12 -- # local i 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.491 21:18:48 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:22.749 /dev/nbd0 00:04:22.749 21:18:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:22.749 21:18:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:22.749 21:18:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:22.749 21:18:48 -- common/autotest_common.sh@855 -- # local i 00:04:22.749 21:18:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:22.749 21:18:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:22.749 21:18:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:22.749 21:18:48 -- common/autotest_common.sh@859 -- # break 00:04:22.749 21:18:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:22.749 21:18:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:22.749 21:18:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:22.749 1+0 records in 00:04:22.749 1+0 records out 00:04:22.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177789 s, 23.0 MB/s 00:04:22.749 21:18:48 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.749 21:18:48 -- common/autotest_common.sh@872 -- # size=4096 00:04:22.749 21:18:48 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.749 21:18:48 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:04:22.749 21:18:48 -- common/autotest_common.sh@875 -- # return 0 00:04:22.749 21:18:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.749 21:18:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.749 21:18:48 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:23.006 /dev/nbd1 00:04:23.006 21:18:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:23.006 21:18:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:23.006 21:18:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:23.006 21:18:48 -- common/autotest_common.sh@855 -- # local i 00:04:23.006 21:18:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:23.006 21:18:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:23.006 21:18:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:23.006 21:18:48 -- common/autotest_common.sh@859 -- # break 00:04:23.006 21:18:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:23.006 21:18:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:23.006 21:18:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.006 1+0 records in 00:04:23.006 1+0 records out 00:04:23.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212071 s, 19.3 MB/s 00:04:23.006 21:18:48 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.006 21:18:48 -- common/autotest_common.sh@872 -- # size=4096 00:04:23.006 21:18:48 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.006 21:18:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:23.006 21:18:48 -- common/autotest_common.sh@875 -- # return 0 00:04:23.006 21:18:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.006 21:18:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.006 21:18:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.006 21:18:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.006 21:18:48 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:23.264 { 00:04:23.264 "nbd_device": "/dev/nbd0", 00:04:23.264 "bdev_name": "Malloc0" 00:04:23.264 }, 00:04:23.264 { 00:04:23.264 "nbd_device": "/dev/nbd1", 00:04:23.264 "bdev_name": "Malloc1" 00:04:23.264 } 00:04:23.264 ]' 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:23.264 { 00:04:23.264 "nbd_device": "/dev/nbd0", 00:04:23.264 "bdev_name": "Malloc0" 00:04:23.264 }, 00:04:23.264 { 00:04:23.264 "nbd_device": "/dev/nbd1", 00:04:23.264 "bdev_name": "Malloc1" 00:04:23.264 } 00:04:23.264 ]' 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:23.264 /dev/nbd1' 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:23.264 /dev/nbd1' 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@65 -- # count=2 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@95 -- # count=2 00:04:23.264 21:18:48 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:23.264 256+0 records in 00:04:23.264 256+0 records out 00:04:23.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500051 s, 210 MB/s 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:23.264 256+0 records in 00:04:23.264 256+0 records out 00:04:23.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239401 s, 43.8 MB/s 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:23.264 256+0 records in 00:04:23.264 256+0 records out 00:04:23.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249218 s, 42.1 MB/s 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:23.264 21:18:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.265 21:18:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:23.265 21:18:48 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.522 21:18:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:23.522 21:18:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.522 21:18:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.522 21:18:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:23.522 21:18:48 -- bdev/nbd_common.sh@51 -- # local i 00:04:23.522 21:18:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.522 21:18:48 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:23.779 21:18:49 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:23.779 21:18:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:23.779 21:18:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:23.779 21:18:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.779 21:18:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.779 21:18:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:23.779 21:18:49 -- bdev/nbd_common.sh@41 -- # break 00:04:23.779 21:18:49 -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.779 21:18:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.779 21:18:49 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.037 21:18:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.037 21:18:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.037 21:18:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.037 21:18:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.037 21:18:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.037 21:18:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.037 21:18:49 -- bdev/nbd_common.sh@41 -- # break 00:04:24.037 21:18:49 -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.037 21:18:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.037 21:18:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.037 21:18:49 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@65 -- # true 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@65 -- # count=0 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@104 -- # count=0 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:24.295 21:18:49 -- bdev/nbd_common.sh@109 -- # return 0 00:04:24.295 21:18:49 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.553 21:18:50 -- event/event.sh@35 -- # sleep 3 00:04:24.811 [2024-04-24 21:18:50.309522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.811 [2024-04-24 21:18:50.424417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.811 [2024-04-24 21:18:50.424421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.811 [2024-04-24 21:18:50.487616] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:24.811 [2024-04-24 21:18:50.487687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
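Teardown in each round polls /proc/partitions until the kernel actually releases the device, then re-queries the RPC server to confirm nothing is still exported. A minimal sketch of that loop; the retry interval is an assumption, since the xtrace above elides it:

# Wait up to 20 attempts for an NBD device to vanish after nbd_stop_disk.
waitfornbd_exit() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1   # assumed delay between retries
    done
    return 1
}
waitfornbd_exit nbd0 && waitfornbd_exit nbd1

# nbd_get_disks now returns [], so counting /dev/nbd matches yields 0.
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
    | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true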
00:04:28.098 21:18:53 -- event/event.sh@38 -- # waitforlisten 2487246 /var/tmp/spdk-nbd.sock 00:04:28.098 21:18:53 -- common/autotest_common.sh@817 -- # '[' -z 2487246 ']' 00:04:28.098 21:18:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.098 21:18:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:28.098 21:18:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:28.098 21:18:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:28.098 21:18:53 -- common/autotest_common.sh@10 -- # set +x 00:04:28.098 21:18:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:28.098 21:18:53 -- common/autotest_common.sh@850 -- # return 0 00:04:28.098 21:18:53 -- event/event.sh@39 -- # killprocess 2487246 00:04:28.098 21:18:53 -- common/autotest_common.sh@936 -- # '[' -z 2487246 ']' 00:04:28.098 21:18:53 -- common/autotest_common.sh@940 -- # kill -0 2487246 00:04:28.098 21:18:53 -- common/autotest_common.sh@941 -- # uname 00:04:28.099 21:18:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:28.099 21:18:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2487246 00:04:28.099 21:18:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:28.099 21:18:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:28.099 21:18:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2487246' 00:04:28.099 killing process with pid 2487246 00:04:28.099 21:18:53 -- common/autotest_common.sh@955 -- # kill 2487246 00:04:28.099 21:18:53 -- common/autotest_common.sh@960 -- # wait 2487246 00:04:28.099 spdk_app_start is called in Round 0. 00:04:28.099 Shutdown signal received, stop current app iteration 00:04:28.099 Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 reinitialization... 00:04:28.099 spdk_app_start is called in Round 1. 00:04:28.099 Shutdown signal received, stop current app iteration 00:04:28.099 Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 reinitialization... 00:04:28.099 spdk_app_start is called in Round 2. 00:04:28.099 Shutdown signal received, stop current app iteration 00:04:28.099 Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 reinitialization... 00:04:28.099 spdk_app_start is called in Round 3. 
00:04:28.099 Shutdown signal received, stop current app iteration 00:04:28.099 21:18:53 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:28.099 21:18:53 -- event/event.sh@42 -- # return 0 00:04:28.099 00:04:28.099 real 0m17.830s 00:04:28.099 user 0m38.374s 00:04:28.099 sys 0m3.173s 00:04:28.099 21:18:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:28.099 21:18:53 -- common/autotest_common.sh@10 -- # set +x 00:04:28.099 ************************************ 00:04:28.099 END TEST app_repeat 00:04:28.099 ************************************ 00:04:28.099 21:18:53 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:28.099 21:18:53 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.099 21:18:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.099 21:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.099 21:18:53 -- common/autotest_common.sh@10 -- # set +x 00:04:28.099 ************************************ 00:04:28.099 START TEST cpu_locks 00:04:28.099 ************************************ 00:04:28.099 21:18:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.099 * Looking for test storage... 00:04:28.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:28.099 21:18:53 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:28.099 21:18:53 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:28.099 21:18:53 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:28.099 21:18:53 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:28.099 21:18:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.099 21:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.099 21:18:53 -- common/autotest_common.sh@10 -- # set +x 00:04:28.358 ************************************ 00:04:28.358 START TEST default_locks 00:04:28.358 ************************************ 00:04:28.358 21:18:53 -- common/autotest_common.sh@1111 -- # default_locks 00:04:28.358 21:18:53 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2489611 00:04:28.358 21:18:53 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.358 21:18:53 -- event/cpu_locks.sh@47 -- # waitforlisten 2489611 00:04:28.358 21:18:53 -- common/autotest_common.sh@817 -- # '[' -z 2489611 ']' 00:04:28.358 21:18:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.358 21:18:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:28.358 21:18:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.358 21:18:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:28.358 21:18:53 -- common/autotest_common.sh@10 -- # set +x 00:04:28.358 [2024-04-24 21:18:53.864580] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
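The round teardown above went through killprocess 2487246, and the same guard recurs in every cpu_locks sub-test below: confirm the PID still names an SPDK reactor before signalling it, then reap it. A simplified sketch; the real helper in autotest_common.sh also unwraps sudo-launched targets:

# Kill an SPDK target only if the PID still looks like one of ours.
killprocess() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0              # already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
    fi
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true
}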
00:04:28.358 [2024-04-24 21:18:53.864681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489611 ] 00:04:28.358 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.358 [2024-04-24 21:18:53.921128] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.358 [2024-04-24 21:18:54.026496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.617 21:18:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:28.617 21:18:54 -- common/autotest_common.sh@850 -- # return 0 00:04:28.617 21:18:54 -- event/cpu_locks.sh@49 -- # locks_exist 2489611 00:04:28.617 21:18:54 -- event/cpu_locks.sh@22 -- # lslocks -p 2489611 00:04:28.617 21:18:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:29.197 lslocks: write error 00:04:29.197 21:18:54 -- event/cpu_locks.sh@50 -- # killprocess 2489611 00:04:29.197 21:18:54 -- common/autotest_common.sh@936 -- # '[' -z 2489611 ']' 00:04:29.197 21:18:54 -- common/autotest_common.sh@940 -- # kill -0 2489611 00:04:29.197 21:18:54 -- common/autotest_common.sh@941 -- # uname 00:04:29.197 21:18:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:29.197 21:18:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2489611 00:04:29.197 21:18:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:29.197 21:18:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:29.197 21:18:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2489611' 00:04:29.197 killing process with pid 2489611 00:04:29.197 21:18:54 -- common/autotest_common.sh@955 -- # kill 2489611 00:04:29.197 21:18:54 -- common/autotest_common.sh@960 -- # wait 2489611 00:04:29.455 21:18:55 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2489611 00:04:29.455 21:18:55 -- common/autotest_common.sh@638 -- # local es=0 00:04:29.455 21:18:55 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2489611 00:04:29.455 21:18:55 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:29.455 21:18:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:29.455 21:18:55 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:29.455 21:18:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:29.455 21:18:55 -- common/autotest_common.sh@641 -- # waitforlisten 2489611 00:04:29.455 21:18:55 -- common/autotest_common.sh@817 -- # '[' -z 2489611 ']' 00:04:29.455 21:18:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.455 21:18:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:29.455 21:18:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
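The assertion at the core of default_locks is the single pipeline above: lslocks lists the target's file locks and grep looks for the per-core lock file, whose name contains spdk_cpu_lock. The stray "lslocks: write error" is expected noise rather than a failure: grep -q exits on the first match and closes the pipe while lslocks is still writing. In sketch form:

# True iff the target currently holds an SPDK per-core lock file.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
locks_exist 2489611 && echo 'core lock held'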
00:04:29.455 21:18:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:29.455 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:04:29.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2489611) - No such process 00:04:29.455 ERROR: process (pid: 2489611) is no longer running 00:04:29.455 21:18:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:29.455 21:18:55 -- common/autotest_common.sh@850 -- # return 1 00:04:29.455 21:18:55 -- common/autotest_common.sh@641 -- # es=1 00:04:29.455 21:18:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:29.455 21:18:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:29.455 21:18:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:29.455 21:18:55 -- event/cpu_locks.sh@54 -- # no_locks 00:04:29.455 21:18:55 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:29.455 21:18:55 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:29.455 21:18:55 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:29.455 00:04:29.455 real 0m1.260s 00:04:29.455 user 0m1.176s 00:04:29.455 sys 0m0.541s 00:04:29.455 21:18:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.455 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:04:29.455 ************************************ 00:04:29.455 END TEST default_locks 00:04:29.455 ************************************ 00:04:29.455 21:18:55 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:29.456 21:18:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.456 21:18:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.456 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:04:29.714 ************************************ 00:04:29.714 START TEST default_locks_via_rpc 00:04:29.714 ************************************ 00:04:29.714 21:18:55 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:04:29.714 21:18:55 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2489787 00:04:29.714 21:18:55 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.714 21:18:55 -- event/cpu_locks.sh@63 -- # waitforlisten 2489787 00:04:29.714 21:18:55 -- common/autotest_common.sh@817 -- # '[' -z 2489787 ']' 00:04:29.714 21:18:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.714 21:18:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:29.714 21:18:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.714 21:18:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:29.714 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:04:29.714 [2024-04-24 21:18:55.252996] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
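After the kill, the suite proves the daemon is really gone with NOT waitforlisten: the wrapper runs its arguments and succeeds only when they fail, which is why the "kill: ... No such process" line above counts as a pass. A condensed sketch; the real helper also distinguishes signal exits (es > 128):

# Succeed iff the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
NOT waitforlisten 2489611 && echo 'target is gone, as expected'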
00:04:29.714 [2024-04-24 21:18:55.253073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489787 ] 00:04:29.714 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.714 [2024-04-24 21:18:55.309842] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.973 [2024-04-24 21:18:55.419148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.231 21:18:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:30.231 21:18:55 -- common/autotest_common.sh@850 -- # return 0 00:04:30.231 21:18:55 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:30.231 21:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.231 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:04:30.231 21:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.231 21:18:55 -- event/cpu_locks.sh@67 -- # no_locks 00:04:30.231 21:18:55 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:30.231 21:18:55 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:30.231 21:18:55 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:30.231 21:18:55 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:30.231 21:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.231 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:04:30.231 21:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.231 21:18:55 -- event/cpu_locks.sh@71 -- # locks_exist 2489787 00:04:30.231 21:18:55 -- event/cpu_locks.sh@22 -- # lslocks -p 2489787 00:04:30.231 21:18:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:30.489 21:18:55 -- event/cpu_locks.sh@73 -- # killprocess 2489787 00:04:30.489 21:18:55 -- common/autotest_common.sh@936 -- # '[' -z 2489787 ']' 00:04:30.489 21:18:55 -- common/autotest_common.sh@940 -- # kill -0 2489787 00:04:30.489 21:18:55 -- common/autotest_common.sh@941 -- # uname 00:04:30.489 21:18:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:30.489 21:18:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2489787 00:04:30.489 21:18:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:30.489 21:18:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:30.489 21:18:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2489787' 00:04:30.489 killing process with pid 2489787 00:04:30.489 21:18:55 -- common/autotest_common.sh@955 -- # kill 2489787 00:04:30.489 21:18:55 -- common/autotest_common.sh@960 -- # wait 2489787 00:04:31.056 00:04:31.056 real 0m1.254s 00:04:31.056 user 0m1.178s 00:04:31.056 sys 0m0.543s 00:04:31.056 21:18:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:31.056 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:04:31.056 ************************************ 00:04:31.056 END TEST default_locks_via_rpc 00:04:31.056 ************************************ 00:04:31.056 21:18:56 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:31.056 21:18:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:31.056 21:18:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.056 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:04:31.056 ************************************ 00:04:31.056 START TEST non_locking_app_on_locked_coremask 
00:04:31.056 ************************************ 00:04:31.056 21:18:56 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:04:31.056 21:18:56 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2489958 00:04:31.056 21:18:56 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.056 21:18:56 -- event/cpu_locks.sh@81 -- # waitforlisten 2489958 /var/tmp/spdk.sock 00:04:31.056 21:18:56 -- common/autotest_common.sh@817 -- # '[' -z 2489958 ']' 00:04:31.056 21:18:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.056 21:18:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:31.056 21:18:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.056 21:18:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:31.056 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:04:31.056 [2024-04-24 21:18:56.638317] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:31.056 [2024-04-24 21:18:56.638398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489958 ] 00:04:31.056 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.056 [2024-04-24 21:18:56.695265] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.314 [2024-04-24 21:18:56.804093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.572 21:18:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:31.572 21:18:57 -- common/autotest_common.sh@850 -- # return 0 00:04:31.572 21:18:57 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2490080 00:04:31.572 21:18:57 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:31.572 21:18:57 -- event/cpu_locks.sh@85 -- # waitforlisten 2490080 /var/tmp/spdk2.sock 00:04:31.572 21:18:57 -- common/autotest_common.sh@817 -- # '[' -z 2490080 ']' 00:04:31.572 21:18:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:31.572 21:18:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:31.572 21:18:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:31.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:31.572 21:18:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:31.572 21:18:57 -- common/autotest_common.sh@10 -- # set +x 00:04:31.572 [2024-04-24 21:18:57.112746] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:31.572 [2024-04-24 21:18:57.112819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490080 ] 00:04:31.572 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.572 [2024-04-24 21:18:57.202558] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
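What non_locking_app_on_locked_coremask establishes: a second target may share core 0 with a lock-holding first target only by opting out of the core lock and talking on its own RPC socket. Condensed, with paths relative to the SPDK tree as in the log (waitforlisten is the suite's helper):

# First instance pins core 0 and takes its lock file.
build/bin/spdk_tgt -m 0x1 &
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock

# Second instance shares core 0 but skips the lock, and needs a separate
# RPC socket so the two targets don't collide.
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock   # logs "CPU core locks deactivated."

lslocks -p "$pid1" | grep -q spdk_cpu_lock  # only pid1 holds the lock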
00:04:31.572 [2024-04-24 21:18:57.202592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.830 [2024-04-24 21:18:57.437029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.397 21:18:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:32.397 21:18:58 -- common/autotest_common.sh@850 -- # return 0 00:04:32.397 21:18:58 -- event/cpu_locks.sh@87 -- # locks_exist 2489958 00:04:32.397 21:18:58 -- event/cpu_locks.sh@22 -- # lslocks -p 2489958 00:04:32.397 21:18:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:32.961 lslocks: write error 00:04:32.961 21:18:58 -- event/cpu_locks.sh@89 -- # killprocess 2489958 00:04:32.961 21:18:58 -- common/autotest_common.sh@936 -- # '[' -z 2489958 ']' 00:04:32.961 21:18:58 -- common/autotest_common.sh@940 -- # kill -0 2489958 00:04:32.961 21:18:58 -- common/autotest_common.sh@941 -- # uname 00:04:32.961 21:18:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:32.961 21:18:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2489958 00:04:32.962 21:18:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:32.962 21:18:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:32.962 21:18:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2489958' 00:04:32.962 killing process with pid 2489958 00:04:32.962 21:18:58 -- common/autotest_common.sh@955 -- # kill 2489958 00:04:32.962 21:18:58 -- common/autotest_common.sh@960 -- # wait 2489958 00:04:33.895 21:18:59 -- event/cpu_locks.sh@90 -- # killprocess 2490080 00:04:33.895 21:18:59 -- common/autotest_common.sh@936 -- # '[' -z 2490080 ']' 00:04:33.895 21:18:59 -- common/autotest_common.sh@940 -- # kill -0 2490080 00:04:33.895 21:18:59 -- common/autotest_common.sh@941 -- # uname 00:04:33.895 21:18:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:33.895 21:18:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2490080 00:04:33.895 21:18:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:33.895 21:18:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:33.895 21:18:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2490080' 00:04:33.895 killing process with pid 2490080 00:04:33.895 21:18:59 -- common/autotest_common.sh@955 -- # kill 2490080 00:04:33.895 21:18:59 -- common/autotest_common.sh@960 -- # wait 2490080 00:04:34.460 00:04:34.460 real 0m3.277s 00:04:34.460 user 0m3.406s 00:04:34.460 sys 0m1.018s 00:04:34.460 21:18:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:34.460 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:04:34.460 ************************************ 00:04:34.460 END TEST non_locking_app_on_locked_coremask 00:04:34.460 ************************************ 00:04:34.460 21:18:59 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:34.460 21:18:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.460 21:18:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.460 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:04:34.460 ************************************ 00:04:34.460 START TEST locking_app_on_unlocked_coremask 00:04:34.460 ************************************ 00:04:34.460 21:18:59 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:04:34.460 21:18:59 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2490399 00:04:34.460 21:18:59 -- 
event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:34.460 21:18:59 -- event/cpu_locks.sh@99 -- # waitforlisten 2490399 /var/tmp/spdk.sock 00:04:34.460 21:18:59 -- common/autotest_common.sh@817 -- # '[' -z 2490399 ']' 00:04:34.460 21:18:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.460 21:18:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:34.460 21:18:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.460 21:18:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:34.460 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:04:34.460 [2024-04-24 21:19:00.037392] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:34.460 [2024-04-24 21:19:00.037522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490399 ] 00:04:34.460 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.460 [2024-04-24 21:19:00.096918] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:34.460 [2024-04-24 21:19:00.096979] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.718 [2024-04-24 21:19:00.210011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.976 21:19:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:34.976 21:19:00 -- common/autotest_common.sh@850 -- # return 0 00:04:34.976 21:19:00 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2490527 00:04:34.976 21:19:00 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:34.976 21:19:00 -- event/cpu_locks.sh@103 -- # waitforlisten 2490527 /var/tmp/spdk2.sock 00:04:34.976 21:19:00 -- common/autotest_common.sh@817 -- # '[' -z 2490527 ']' 00:04:34.976 21:19:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:34.976 21:19:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:34.976 21:19:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:34.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:34.976 21:19:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:34.976 21:19:00 -- common/autotest_common.sh@10 -- # set +x 00:04:34.976 [2024-04-24 21:19:00.524435] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:04:34.976 [2024-04-24 21:19:00.524507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490527 ] 00:04:34.976 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.976 [2024-04-24 21:19:00.617078] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.234 [2024-04-24 21:19:00.855127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.800 21:19:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:35.800 21:19:01 -- common/autotest_common.sh@850 -- # return 0 00:04:35.800 21:19:01 -- event/cpu_locks.sh@105 -- # locks_exist 2490527 00:04:35.800 21:19:01 -- event/cpu_locks.sh@22 -- # lslocks -p 2490527 00:04:35.800 21:19:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.367 lslocks: write error 00:04:36.367 21:19:01 -- event/cpu_locks.sh@107 -- # killprocess 2490399 00:04:36.367 21:19:01 -- common/autotest_common.sh@936 -- # '[' -z 2490399 ']' 00:04:36.367 21:19:01 -- common/autotest_common.sh@940 -- # kill -0 2490399 00:04:36.367 21:19:01 -- common/autotest_common.sh@941 -- # uname 00:04:36.367 21:19:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:36.367 21:19:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2490399 00:04:36.367 21:19:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:36.367 21:19:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:36.367 21:19:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2490399' 00:04:36.367 killing process with pid 2490399 00:04:36.367 21:19:01 -- common/autotest_common.sh@955 -- # kill 2490399 00:04:36.367 21:19:01 -- common/autotest_common.sh@960 -- # wait 2490399 00:04:37.301 21:19:02 -- event/cpu_locks.sh@108 -- # killprocess 2490527 00:04:37.301 21:19:02 -- common/autotest_common.sh@936 -- # '[' -z 2490527 ']' 00:04:37.301 21:19:02 -- common/autotest_common.sh@940 -- # kill -0 2490527 00:04:37.301 21:19:02 -- common/autotest_common.sh@941 -- # uname 00:04:37.301 21:19:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:37.301 21:19:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2490527 00:04:37.301 21:19:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:37.301 21:19:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:37.301 21:19:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2490527' 00:04:37.301 killing process with pid 2490527 00:04:37.301 21:19:02 -- common/autotest_common.sh@955 -- # kill 2490527 00:04:37.301 21:19:02 -- common/autotest_common.sh@960 -- # wait 2490527 00:04:37.874 00:04:37.874 real 0m3.378s 00:04:37.874 user 0m3.491s 00:04:37.874 sys 0m1.047s 00:04:37.874 21:19:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:37.874 21:19:03 -- common/autotest_common.sh@10 -- # set +x 00:04:37.874 ************************************ 00:04:37.874 END TEST locking_app_on_unlocked_coremask 00:04:37.874 ************************************ 00:04:37.874 21:19:03 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:37.874 21:19:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.874 21:19:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.874 21:19:03 -- common/autotest_common.sh@10 -- # set +x 00:04:37.874 
************************************ 00:04:37.874 START TEST locking_app_on_locked_coremask 00:04:37.874 ************************************ 00:04:37.874 21:19:03 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:04:37.874 21:19:03 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2490845 00:04:37.874 21:19:03 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.874 21:19:03 -- event/cpu_locks.sh@116 -- # waitforlisten 2490845 /var/tmp/spdk.sock 00:04:37.874 21:19:03 -- common/autotest_common.sh@817 -- # '[' -z 2490845 ']' 00:04:37.874 21:19:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.874 21:19:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:37.874 21:19:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.874 21:19:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:37.874 21:19:03 -- common/autotest_common.sh@10 -- # set +x 00:04:37.874 [2024-04-24 21:19:03.535797] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:37.875 [2024-04-24 21:19:03.535898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490845 ] 00:04:38.133 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.133 [2024-04-24 21:19:03.597612] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.133 [2024-04-24 21:19:03.710259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.067 21:19:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:39.067 21:19:04 -- common/autotest_common.sh@850 -- # return 0 00:04:39.067 21:19:04 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2490980 00:04:39.067 21:19:04 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:39.067 21:19:04 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2490980 /var/tmp/spdk2.sock 00:04:39.067 21:19:04 -- common/autotest_common.sh@638 -- # local es=0 00:04:39.067 21:19:04 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2490980 /var/tmp/spdk2.sock 00:04:39.067 21:19:04 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:39.067 21:19:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:39.067 21:19:04 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:39.067 21:19:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:39.067 21:19:04 -- common/autotest_common.sh@641 -- # waitforlisten 2490980 /var/tmp/spdk2.sock 00:04:39.067 21:19:04 -- common/autotest_common.sh@817 -- # '[' -z 2490980 ']' 00:04:39.067 21:19:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:39.067 21:19:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:39.067 21:19:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:39.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
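The run below is expected to fail: pid 2490845 still holds the core 0 lock, so the second target's EAL init aborts with a claim error. The suite's locks_exist check, judging by the lslocks traces earlier in the log, appears to boil down to a one-liner like the following sketch; spdk_cpu_lock is the lock name visible in those traces.

  # Hedged reconstruction of the lock check seen in the lslocks traces
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo 'core lock held'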
00:04:39.067 21:19:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:39.067 21:19:04 -- common/autotest_common.sh@10 -- # set +x 00:04:39.067 [2024-04-24 21:19:04.502901] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:39.067 [2024-04-24 21:19:04.502995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490980 ] 00:04:39.067 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.067 [2024-04-24 21:19:04.601084] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2490845 has claimed it. 00:04:39.067 [2024-04-24 21:19:04.601149] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:39.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2490980) - No such process 00:04:39.632 ERROR: process (pid: 2490980) is no longer running 00:04:39.632 21:19:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:39.632 21:19:05 -- common/autotest_common.sh@850 -- # return 1 00:04:39.632 21:19:05 -- common/autotest_common.sh@641 -- # es=1 00:04:39.632 21:19:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:39.632 21:19:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:39.632 21:19:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:39.632 21:19:05 -- event/cpu_locks.sh@122 -- # locks_exist 2490845 00:04:39.632 21:19:05 -- event/cpu_locks.sh@22 -- # lslocks -p 2490845 00:04:39.632 21:19:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.890 lslocks: write error 00:04:39.890 21:19:05 -- event/cpu_locks.sh@124 -- # killprocess 2490845 00:04:39.890 21:19:05 -- common/autotest_common.sh@936 -- # '[' -z 2490845 ']' 00:04:39.890 21:19:05 -- common/autotest_common.sh@940 -- # kill -0 2490845 00:04:39.890 21:19:05 -- common/autotest_common.sh@941 -- # uname 00:04:39.890 21:19:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:39.890 21:19:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2490845 00:04:39.890 21:19:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:39.890 21:19:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:39.890 21:19:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2490845' 00:04:39.890 killing process with pid 2490845 00:04:39.890 21:19:05 -- common/autotest_common.sh@955 -- # kill 2490845 00:04:39.890 21:19:05 -- common/autotest_common.sh@960 -- # wait 2490845 00:04:40.496 00:04:40.496 real 0m2.526s 00:04:40.496 user 0m2.880s 00:04:40.496 sys 0m0.684s 00:04:40.496 21:19:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:40.496 21:19:06 -- common/autotest_common.sh@10 -- # set +x 00:04:40.496 ************************************ 00:04:40.496 END TEST locking_app_on_locked_coremask 00:04:40.496 ************************************ 00:04:40.496 21:19:06 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:40.496 21:19:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.496 21:19:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.496 21:19:06 -- common/autotest_common.sh@10 -- # set +x 00:04:40.496 ************************************ 00:04:40.496 START TEST locking_overlapped_coremask 00:04:40.496 
************************************ 00:04:40.496 21:19:06 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:04:40.496 21:19:06 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2491273 00:04:40.496 21:19:06 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:40.496 21:19:06 -- event/cpu_locks.sh@133 -- # waitforlisten 2491273 /var/tmp/spdk.sock 00:04:40.496 21:19:06 -- common/autotest_common.sh@817 -- # '[' -z 2491273 ']' 00:04:40.496 21:19:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.496 21:19:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:40.496 21:19:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.496 21:19:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:40.496 21:19:06 -- common/autotest_common.sh@10 -- # set +x 00:04:40.755 [2024-04-24 21:19:06.183839] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:40.755 [2024-04-24 21:19:06.183931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491273 ] 00:04:40.755 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.755 [2024-04-24 21:19:06.245138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:40.755 [2024-04-24 21:19:06.355235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.755 [2024-04-24 21:19:06.355291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.755 [2024-04-24 21:19:06.355294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.014 21:19:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:41.014 21:19:06 -- common/autotest_common.sh@850 -- # return 0 00:04:41.014 21:19:06 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2491285 00:04:41.014 21:19:06 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2491285 /var/tmp/spdk2.sock 00:04:41.014 21:19:06 -- common/autotest_common.sh@638 -- # local es=0 00:04:41.014 21:19:06 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:41.014 21:19:06 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2491285 /var/tmp/spdk2.sock 00:04:41.014 21:19:06 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:41.014 21:19:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:41.014 21:19:06 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:41.014 21:19:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:41.014 21:19:06 -- common/autotest_common.sh@641 -- # waitforlisten 2491285 /var/tmp/spdk2.sock 00:04:41.014 21:19:06 -- common/autotest_common.sh@817 -- # '[' -z 2491285 ']' 00:04:41.014 21:19:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.014 21:19:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:41.014 21:19:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:41.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.014 21:19:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:41.014 21:19:06 -- common/autotest_common.sh@10 -- # set +x 00:04:41.014 [2024-04-24 21:19:06.665867] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:41.014 [2024-04-24 21:19:06.665958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491285 ] 00:04:41.014 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.272 [2024-04-24 21:19:06.754823] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2491273 has claimed it. 00:04:41.272 [2024-04-24 21:19:06.754881] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:41.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2491285) - No such process 00:04:41.839 ERROR: process (pid: 2491285) is no longer running 00:04:41.839 21:19:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:41.839 21:19:07 -- common/autotest_common.sh@850 -- # return 1 00:04:41.839 21:19:07 -- common/autotest_common.sh@641 -- # es=1 00:04:41.839 21:19:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:41.839 21:19:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:41.839 21:19:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:41.839 21:19:07 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:41.839 21:19:07 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:41.839 21:19:07 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:41.839 21:19:07 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:41.839 21:19:07 -- event/cpu_locks.sh@141 -- # killprocess 2491273 00:04:41.839 21:19:07 -- common/autotest_common.sh@936 -- # '[' -z 2491273 ']' 00:04:41.839 21:19:07 -- common/autotest_common.sh@940 -- # kill -0 2491273 00:04:41.839 21:19:07 -- common/autotest_common.sh@941 -- # uname 00:04:41.839 21:19:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.839 21:19:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2491273 00:04:41.839 21:19:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:41.839 21:19:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:41.839 21:19:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2491273' 00:04:41.839 killing process with pid 2491273 00:04:41.839 21:19:07 -- common/autotest_common.sh@955 -- # kill 2491273 00:04:41.839 21:19:07 -- common/autotest_common.sh@960 -- # wait 2491273 00:04:42.406 00:04:42.406 real 0m1.720s 00:04:42.406 user 0m4.550s 00:04:42.406 sys 0m0.458s 00:04:42.406 21:19:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:42.406 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:04:42.406 ************************************ 00:04:42.406 END TEST locking_overlapped_coremask 00:04:42.406 ************************************ 00:04:42.406 21:19:07 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:42.406 21:19:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.406 21:19:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.406 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:04:42.406 ************************************ 00:04:42.406 START TEST locking_overlapped_coremask_via_rpc 00:04:42.406 ************************************ 00:04:42.406 21:19:07 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:04:42.406 21:19:07 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2491457 00:04:42.406 21:19:07 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:42.406 21:19:07 -- event/cpu_locks.sh@149 -- # waitforlisten 2491457 /var/tmp/spdk.sock 00:04:42.406 21:19:07 -- common/autotest_common.sh@817 -- # '[' -z 2491457 ']' 00:04:42.406 21:19:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.406 21:19:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:42.406 21:19:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.406 21:19:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:42.406 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:04:42.406 [2024-04-24 21:19:08.031349] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:42.406 [2024-04-24 21:19:08.031445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491457 ] 00:04:42.406 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.665 [2024-04-24 21:19:08.098010] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:42.665 [2024-04-24 21:19:08.098053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:42.665 [2024-04-24 21:19:08.217308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.665 [2024-04-24 21:19:08.217373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.665 [2024-04-24 21:19:08.217376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.923 21:19:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:42.923 21:19:08 -- common/autotest_common.sh@850 -- # return 0 00:04:42.923 21:19:08 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2491591 00:04:42.923 21:19:08 -- event/cpu_locks.sh@153 -- # waitforlisten 2491591 /var/tmp/spdk2.sock 00:04:42.923 21:19:08 -- common/autotest_common.sh@817 -- # '[' -z 2491591 ']' 00:04:42.924 21:19:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:42.924 21:19:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:42.924 21:19:08 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:42.924 21:19:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:42.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
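The claim failure that follows lands on core 2 for a simple reason: -m 0x7 is binary 111 (cores 0 to 2) and -m 0x1c is binary 11100 (cores 2 to 4), so core 2 is the only core both masks cover. Once the first target enables locks over RPC, the second target's attempt has to fail there. A quick worked check of the overlap:

  # The intersection of the two core masks used above
  printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2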
00:04:42.924 21:19:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:42.924 21:19:08 -- common/autotest_common.sh@10 -- # set +x 00:04:42.924 [2024-04-24 21:19:08.511686] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:42.924 [2024-04-24 21:19:08.511782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491591 ] 00:04:42.924 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.924 [2024-04-24 21:19:08.600257] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:42.924 [2024-04-24 21:19:08.600305] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.182 [2024-04-24 21:19:08.817501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.182 [2024-04-24 21:19:08.817565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:43.182 [2024-04-24 21:19:08.817568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.117 21:19:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:44.117 21:19:09 -- common/autotest_common.sh@850 -- # return 0 00:04:44.117 21:19:09 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:44.117 21:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:44.117 21:19:09 -- common/autotest_common.sh@10 -- # set +x 00:04:44.117 21:19:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:44.117 21:19:09 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.117 21:19:09 -- common/autotest_common.sh@638 -- # local es=0 00:04:44.117 21:19:09 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.117 21:19:09 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:44.117 21:19:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:44.117 21:19:09 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:44.117 21:19:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:44.117 21:19:09 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.117 21:19:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:44.117 21:19:09 -- common/autotest_common.sh@10 -- # set +x 00:04:44.117 [2024-04-24 21:19:09.464718] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2491457 has claimed it. 
00:04:44.117 request: 00:04:44.117 { 00:04:44.117 "method": "framework_enable_cpumask_locks", 00:04:44.117 "req_id": 1 00:04:44.117 } 00:04:44.117 Got JSON-RPC error response 00:04:44.117 response: 00:04:44.117 { 00:04:44.117 "code": -32603, 00:04:44.117 "message": "Failed to claim CPU core: 2" 00:04:44.117 } 00:04:44.117 21:19:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:44.117 21:19:09 -- common/autotest_common.sh@641 -- # es=1 00:04:44.117 21:19:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:44.117 21:19:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:44.117 21:19:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:44.117 21:19:09 -- event/cpu_locks.sh@158 -- # waitforlisten 2491457 /var/tmp/spdk.sock 00:04:44.117 21:19:09 -- common/autotest_common.sh@817 -- # '[' -z 2491457 ']' 00:04:44.117 21:19:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.117 21:19:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:44.117 21:19:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.117 21:19:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:44.117 21:19:09 -- common/autotest_common.sh@10 -- # set +x 00:04:44.117 21:19:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:44.117 21:19:09 -- common/autotest_common.sh@850 -- # return 0 00:04:44.117 21:19:09 -- event/cpu_locks.sh@159 -- # waitforlisten 2491591 /var/tmp/spdk2.sock 00:04:44.117 21:19:09 -- common/autotest_common.sh@817 -- # '[' -z 2491591 ']' 00:04:44.117 21:19:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.117 21:19:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:44.117 21:19:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
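The JSON above is the raw request and error response for the second target's attempt. In the suite this is driven through rpc_cmd, which is assumed here to wrap scripts/rpc.py with -s selecting the RPC socket; the two calls traced amount to:

  # Hedged sketch; rpc_cmd and NOT are the suite's helpers (assumed wrappers)
  rpc_cmd framework_enable_cpumask_locks                              # first target: claims cores 0-2
  NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second: -32603, core 2 taken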
00:04:44.117 21:19:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:44.117 21:19:09 -- common/autotest_common.sh@10 -- # set +x 00:04:44.376 21:19:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:44.376 21:19:09 -- common/autotest_common.sh@850 -- # return 0 00:04:44.376 21:19:09 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:44.376 21:19:09 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:44.376 21:19:09 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:44.376 21:19:09 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:44.376 00:04:44.376 real 0m1.973s 00:04:44.376 user 0m1.031s 00:04:44.376 sys 0m0.171s 00:04:44.376 21:19:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.376 21:19:09 -- common/autotest_common.sh@10 -- # set +x 00:04:44.376 ************************************ 00:04:44.376 END TEST locking_overlapped_coremask_via_rpc 00:04:44.376 ************************************ 00:04:44.376 21:19:09 -- event/cpu_locks.sh@174 -- # cleanup 00:04:44.376 21:19:09 -- event/cpu_locks.sh@15 -- # [[ -z 2491457 ]] 00:04:44.376 21:19:09 -- event/cpu_locks.sh@15 -- # killprocess 2491457 00:04:44.376 21:19:09 -- common/autotest_common.sh@936 -- # '[' -z 2491457 ']' 00:04:44.376 21:19:09 -- common/autotest_common.sh@940 -- # kill -0 2491457 00:04:44.376 21:19:09 -- common/autotest_common.sh@941 -- # uname 00:04:44.376 21:19:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:44.376 21:19:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2491457 00:04:44.376 21:19:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:44.376 21:19:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:44.376 21:19:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2491457' 00:04:44.376 killing process with pid 2491457 00:04:44.376 21:19:10 -- common/autotest_common.sh@955 -- # kill 2491457 00:04:44.376 21:19:10 -- common/autotest_common.sh@960 -- # wait 2491457 00:04:44.942 21:19:10 -- event/cpu_locks.sh@16 -- # [[ -z 2491591 ]] 00:04:44.942 21:19:10 -- event/cpu_locks.sh@16 -- # killprocess 2491591 00:04:44.942 21:19:10 -- common/autotest_common.sh@936 -- # '[' -z 2491591 ']' 00:04:44.942 21:19:10 -- common/autotest_common.sh@940 -- # kill -0 2491591 00:04:44.942 21:19:10 -- common/autotest_common.sh@941 -- # uname 00:04:44.942 21:19:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:44.942 21:19:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2491591 00:04:44.942 21:19:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:44.942 21:19:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:44.942 21:19:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2491591' 00:04:44.942 killing process with pid 2491591 00:04:44.942 21:19:10 -- common/autotest_common.sh@955 -- # kill 2491591 00:04:44.942 21:19:10 -- common/autotest_common.sh@960 -- # wait 2491591 00:04:45.509 21:19:10 -- event/cpu_locks.sh@18 -- # rm -f 00:04:45.509 21:19:10 -- event/cpu_locks.sh@1 -- # cleanup 00:04:45.509 21:19:10 -- event/cpu_locks.sh@15 -- # [[ -z 2491457 ]] 00:04:45.509 21:19:10 -- event/cpu_locks.sh@15 -- # killprocess 2491457 
00:04:45.509 21:19:10 -- common/autotest_common.sh@936 -- # '[' -z 2491457 ']' 00:04:45.509 21:19:10 -- common/autotest_common.sh@940 -- # kill -0 2491457 00:04:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2491457) - No such process 00:04:45.510 21:19:10 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2491457 is not found' 00:04:45.510 Process with pid 2491457 is not found 00:04:45.510 21:19:10 -- event/cpu_locks.sh@16 -- # [[ -z 2491591 ]] 00:04:45.510 21:19:10 -- event/cpu_locks.sh@16 -- # killprocess 2491591 00:04:45.510 21:19:10 -- common/autotest_common.sh@936 -- # '[' -z 2491591 ']' 00:04:45.510 21:19:10 -- common/autotest_common.sh@940 -- # kill -0 2491591 00:04:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2491591) - No such process 00:04:45.510 21:19:10 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2491591 is not found' 00:04:45.510 Process with pid 2491591 is not found 00:04:45.510 21:19:10 -- event/cpu_locks.sh@18 -- # rm -f 00:04:45.510 00:04:45.510 real 0m17.240s 00:04:45.510 user 0m28.697s 00:04:45.510 sys 0m5.587s 00:04:45.510 21:19:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:45.510 21:19:10 -- common/autotest_common.sh@10 -- # set +x 00:04:45.510 ************************************ 00:04:45.510 END TEST cpu_locks 00:04:45.510 ************************************ 00:04:45.510 00:04:45.510 real 0m43.231s 00:04:45.510 user 1m18.849s 00:04:45.510 sys 0m9.876s 00:04:45.510 21:19:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:45.510 21:19:10 -- common/autotest_common.sh@10 -- # set +x 00:04:45.510 ************************************ 00:04:45.510 END TEST event 00:04:45.510 ************************************ 00:04:45.510 21:19:10 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:45.510 21:19:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.510 21:19:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.510 21:19:10 -- common/autotest_common.sh@10 -- # set +x 00:04:45.510 ************************************ 00:04:45.510 START TEST thread 00:04:45.510 ************************************ 00:04:45.510 21:19:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:45.510 * Looking for test storage... 00:04:45.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:45.510 21:19:11 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:45.510 21:19:11 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:45.510 21:19:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.510 21:19:11 -- common/autotest_common.sh@10 -- # set +x 00:04:45.768 ************************************ 00:04:45.768 START TEST thread_poller_perf 00:04:45.768 ************************************ 00:04:45.768 21:19:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:45.769 [2024-04-24 21:19:11.230041] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:04:45.769 [2024-04-24 21:19:11.230102] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491966 ] 00:04:45.769 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.769 [2024-04-24 21:19:11.291457] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.769 [2024-04-24 21:19:11.405579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.769 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:47.142 ====================================== 00:04:47.142 busy:2711080071 (cyc) 00:04:47.142 total_run_count: 351000 00:04:47.142 tsc_hz: 2700000000 (cyc) 00:04:47.142 ====================================== 00:04:47.142 poller_cost: 7723 (cyc), 2860 (nsec) 00:04:47.142 00:04:47.142 real 0m1.304s 00:04:47.142 user 0m1.214s 00:04:47.142 sys 0m0.084s 00:04:47.142 21:19:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.142 21:19:12 -- common/autotest_common.sh@10 -- # set +x 00:04:47.142 ************************************ 00:04:47.142 END TEST thread_poller_perf 00:04:47.142 ************************************ 00:04:47.142 21:19:12 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.142 21:19:12 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:47.142 21:19:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.142 21:19:12 -- common/autotest_common.sh@10 -- # set +x 00:04:47.142 ************************************ 00:04:47.142 START TEST thread_poller_perf 00:04:47.142 ************************************ 00:04:47.142 21:19:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.143 [2024-04-24 21:19:12.652306] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:47.143 [2024-04-24 21:19:12.652368] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492137 ] 00:04:47.143 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.143 [2024-04-24 21:19:12.713656] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.400 [2024-04-24 21:19:12.836936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.400 Running 1000 pollers for 1 seconds with 0 microseconds period. 
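Both result tables in this test pair derive poller_cost the same way: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. Checking the 1 microsecond run above (the 0 microsecond run that follows works out identically):

  # Worked check of the poller_cost line above
  echo $(( 2711080071 / 351000 ))               # 7723 cyc per poller call
  echo $(( 7723 * 1000000000 / 2700000000 ))    # 2860 nsec at tsc_hz=2700000000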
00:04:48.334 ====================================== 00:04:48.334 busy:2702738895 (cyc) 00:04:48.334 total_run_count: 4423000 00:04:48.334 tsc_hz: 2700000000 (cyc) 00:04:48.334 ====================================== 00:04:48.334 poller_cost: 611 (cyc), 226 (nsec) 00:04:48.334 00:04:48.334 real 0m1.312s 00:04:48.334 user 0m1.227s 00:04:48.334 sys 0m0.080s 00:04:48.334 21:19:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:48.334 21:19:13 -- common/autotest_common.sh@10 -- # set +x 00:04:48.334 ************************************ 00:04:48.334 END TEST thread_poller_perf 00:04:48.334 ************************************ 00:04:48.334 21:19:13 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:48.334 00:04:48.334 real 0m2.911s 00:04:48.334 user 0m2.552s 00:04:48.334 sys 0m0.335s 00:04:48.334 21:19:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:48.334 21:19:13 -- common/autotest_common.sh@10 -- # set +x 00:04:48.334 ************************************ 00:04:48.334 END TEST thread 00:04:48.334 ************************************ 00:04:48.334 21:19:13 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:48.334 21:19:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.334 21:19:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.334 21:19:13 -- common/autotest_common.sh@10 -- # set +x 00:04:48.594 ************************************ 00:04:48.594 START TEST accel 00:04:48.594 ************************************ 00:04:48.594 21:19:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:48.594 * Looking for test storage... 00:04:48.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:48.594 21:19:14 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:48.594 21:19:14 -- accel/accel.sh@82 -- # get_expected_opcs 00:04:48.594 21:19:14 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.594 21:19:14 -- accel/accel.sh@62 -- # spdk_tgt_pid=2492454 00:04:48.594 21:19:14 -- accel/accel.sh@63 -- # waitforlisten 2492454 00:04:48.594 21:19:14 -- common/autotest_common.sh@817 -- # '[' -z 2492454 ']' 00:04:48.594 21:19:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.594 21:19:14 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:48.594 21:19:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:48.594 21:19:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.594 21:19:14 -- accel/accel.sh@61 -- # build_accel_config 00:04:48.594 21:19:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:48.594 21:19:14 -- common/autotest_common.sh@10 -- # set +x 00:04:48.594 21:19:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:48.594 21:19:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:48.594 21:19:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:48.594 21:19:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:48.594 21:19:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:48.594 21:19:14 -- accel/accel.sh@40 -- # local IFS=, 00:04:48.594 21:19:14 -- accel/accel.sh@41 -- # jq -r . 
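The loop that follows reads the opcode-to-module table over RPC and records one entry per opcode; with no accel config supplied, every opcode resolves to the software module. The dump it consumes can be reproduced with the jq filter visible in the trace (a sketch, assuming a target is running on the default socket):

  rpc_cmd accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # emits lines like copy=software, fill=software, ...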
00:04:48.594 [2024-04-24 21:19:14.174399] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:48.594 [2024-04-24 21:19:14.174477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492454 ] 00:04:48.594 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.594 [2024-04-24 21:19:14.230391] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.853 [2024-04-24 21:19:14.334586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.112 21:19:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:49.112 21:19:14 -- common/autotest_common.sh@850 -- # return 0 00:04:49.112 21:19:14 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:49.112 21:19:14 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:49.112 21:19:14 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:49.112 21:19:14 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:49.112 21:19:14 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:49.112 21:19:14 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:49.112 21:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.112 21:19:14 -- common/autotest_common.sh@10 -- # set +x 00:04:49.112 21:19:14 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:04:49.112 21:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # IFS== 00:04:49.112 21:19:14 -- accel/accel.sh@72 -- # read -r opc module 00:04:49.112 21:19:14 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.112 21:19:14 -- accel/accel.sh@75 -- # killprocess 2492454 00:04:49.112 21:19:14 -- common/autotest_common.sh@936 -- # '[' -z 2492454 ']' 00:04:49.112 21:19:14 -- common/autotest_common.sh@940 -- # kill -0 2492454 00:04:49.112 21:19:14 -- common/autotest_common.sh@941 -- # uname 00:04:49.112 21:19:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:49.112 21:19:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2492454 00:04:49.112 21:19:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:49.112 21:19:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:49.112 21:19:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2492454' 00:04:49.112 killing process with pid 2492454 00:04:49.112 21:19:14 -- common/autotest_common.sh@955 -- # kill 2492454 00:04:49.112 21:19:14 -- common/autotest_common.sh@960 -- # wait 2492454 00:04:49.680 21:19:15 -- accel/accel.sh@76 -- # trap - ERR 00:04:49.680 21:19:15 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:49.680 21:19:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:04:49.680 21:19:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.680 21:19:15 -- common/autotest_common.sh@10 -- # set +x 00:04:49.680 21:19:15 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:04:49.680 21:19:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:49.680 21:19:15 -- accel/accel.sh@12 -- # 
build_accel_config 00:04:49.680 21:19:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:49.680 21:19:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:49.680 21:19:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.680 21:19:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.680 21:19:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:49.680 21:19:15 -- accel/accel.sh@40 -- # local IFS=, 00:04:49.680 21:19:15 -- accel/accel.sh@41 -- # jq -r . 00:04:49.680 21:19:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.680 21:19:15 -- common/autotest_common.sh@10 -- # set +x 00:04:49.680 21:19:15 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:49.680 21:19:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:49.680 21:19:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.680 21:19:15 -- common/autotest_common.sh@10 -- # set +x 00:04:49.680 ************************************ 00:04:49.680 START TEST accel_missing_filename 00:04:49.680 ************************************ 00:04:49.681 21:19:15 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:04:49.681 21:19:15 -- common/autotest_common.sh@638 -- # local es=0 00:04:49.681 21:19:15 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:49.681 21:19:15 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:49.681 21:19:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:49.681 21:19:15 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:49.681 21:19:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:49.681 21:19:15 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:04:49.681 21:19:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:49.681 21:19:15 -- accel/accel.sh@12 -- # build_accel_config 00:04:49.681 21:19:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:49.681 21:19:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:49.681 21:19:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.681 21:19:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.681 21:19:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:49.681 21:19:15 -- accel/accel.sh@40 -- # local IFS=, 00:04:49.681 21:19:15 -- accel/accel.sh@41 -- # jq -r . 00:04:49.939 [2024-04-24 21:19:15.369451] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:49.939 [2024-04-24 21:19:15.369514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492639 ] 00:04:49.939 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.939 [2024-04-24 21:19:15.433659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.939 [2024-04-24 21:19:15.550390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.939 [2024-04-24 21:19:15.612146] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.217 [2024-04-24 21:19:15.696404] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:04:50.217 A filename is required. 
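The 'A filename is required.' error above is accel_perf rejecting a compress run started without an input file; per the option help printed further below, compress and decompress take the input via -l. The next test passes -l (and then trips over -y instead, since compress does not support verification), so a run that avoids both errors would look like:

  # Compress with an input file and no verify switch (path taken from the log)
  ./build/examples/accel_perf -t 1 -w compress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib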
00:04:50.217 21:19:15 -- common/autotest_common.sh@641 -- # es=234 00:04:50.217 21:19:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:50.217 21:19:15 -- common/autotest_common.sh@650 -- # es=106 00:04:50.217 21:19:15 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:50.217 21:19:15 -- common/autotest_common.sh@658 -- # es=1 00:04:50.217 21:19:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:50.217 00:04:50.217 real 0m0.471s 00:04:50.217 user 0m0.361s 00:04:50.217 sys 0m0.144s 00:04:50.217 21:19:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.217 21:19:15 -- common/autotest_common.sh@10 -- # set +x 00:04:50.217 ************************************ 00:04:50.217 END TEST accel_missing_filename 00:04:50.217 ************************************ 00:04:50.217 21:19:15 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.217 21:19:15 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:50.217 21:19:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.217 21:19:15 -- common/autotest_common.sh@10 -- # set +x 00:04:50.476 ************************************ 00:04:50.476 START TEST accel_compress_verify 00:04:50.476 ************************************ 00:04:50.476 21:19:15 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.476 21:19:15 -- common/autotest_common.sh@638 -- # local es=0 00:04:50.476 21:19:15 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.476 21:19:15 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:50.476 21:19:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:50.476 21:19:15 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:50.476 21:19:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:50.476 21:19:15 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.476 21:19:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.476 21:19:15 -- accel/accel.sh@12 -- # build_accel_config 00:04:50.476 21:19:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.476 21:19:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.476 21:19:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.476 21:19:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.476 21:19:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.476 21:19:15 -- accel/accel.sh@40 -- # local IFS=, 00:04:50.476 21:19:15 -- accel/accel.sh@41 -- # jq -r . 00:04:50.476 [2024-04-24 21:19:15.969391] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:04:50.476 [2024-04-24 21:19:15.969454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492675 ] 00:04:50.476 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.476 [2024-04-24 21:19:16.033670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.476 [2024-04-24 21:19:16.150164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.735 [2024-04-24 21:19:16.209538] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.735 [2024-04-24 21:19:16.291140] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:04:50.735 00:04:50.735 Compression does not support the verify option, aborting. 00:04:50.735 21:19:16 -- common/autotest_common.sh@641 -- # es=161 00:04:50.735 21:19:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:50.735 21:19:16 -- common/autotest_common.sh@650 -- # es=33 00:04:50.735 21:19:16 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:50.735 21:19:16 -- common/autotest_common.sh@658 -- # es=1 00:04:50.735 21:19:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:50.735 00:04:50.735 real 0m0.461s 00:04:50.735 user 0m0.347s 00:04:50.735 sys 0m0.148s 00:04:50.735 21:19:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.735 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:50.735 ************************************ 00:04:50.735 END TEST accel_compress_verify 00:04:50.735 ************************************ 00:04:50.993 21:19:16 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:50.993 21:19:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:50.993 21:19:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.993 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:50.993 ************************************ 00:04:50.993 START TEST accel_wrong_workload 00:04:50.993 ************************************ 00:04:50.994 21:19:16 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:04:50.994 21:19:16 -- common/autotest_common.sh@638 -- # local es=0 00:04:50.994 21:19:16 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:50.994 21:19:16 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:50.994 21:19:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:50.994 21:19:16 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:50.994 21:19:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:50.994 21:19:16 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:04:50.994 21:19:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:50.994 21:19:16 -- accel/accel.sh@12 -- # build_accel_config 00:04:50.994 21:19:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.994 21:19:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.994 21:19:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.994 21:19:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.994 21:19:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.994 21:19:16 -- accel/accel.sh@40 -- # local IFS=, 00:04:50.994 21:19:16 -- accel/accel.sh@41 -- # jq -r . 
00:04:50.994 Unsupported workload type: foobar 00:04:50.994 [2024-04-24 21:19:16.549552] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:50.994 accel_perf options: 00:04:50.994 [-h help message] 00:04:50.994 [-q queue depth per core] 00:04:50.994 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:50.994 [-T number of threads per core 00:04:50.994 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:50.994 [-t time in seconds] 00:04:50.994 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:50.994 [ dif_verify, , dif_generate, dif_generate_copy 00:04:50.994 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:50.994 [-l for compress/decompress workloads, name of uncompressed input file 00:04:50.994 [-S for crc32c workload, use this seed value (default 0) 00:04:50.994 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:50.994 [-f for fill workload, use this BYTE value (default 255) 00:04:50.994 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:50.994 [-y verify result if this switch is on] 00:04:50.994 [-a tasks to allocate per core (default: same value as -q)] 00:04:50.994 Can be used to spread operations across a wider range of memory. 00:04:50.994 21:19:16 -- common/autotest_common.sh@641 -- # es=1 00:04:50.994 21:19:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:50.994 21:19:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:50.994 21:19:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:50.994 00:04:50.994 real 0m0.025s 00:04:50.994 user 0m0.013s 00:04:50.994 sys 0m0.011s 00:04:50.994 21:19:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.994 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:50.994 ************************************ 00:04:50.994 END TEST accel_wrong_workload 00:04:50.994 ************************************ 00:04:50.994 Error: writing output failed: Broken pipe 00:04:50.994 21:19:16 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:50.994 21:19:16 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:50.994 21:19:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.994 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:51.253 ************************************ 00:04:51.253 START TEST accel_negative_buffers 00:04:51.253 ************************************ 00:04:51.253 21:19:16 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:51.253 21:19:16 -- common/autotest_common.sh@638 -- # local es=0 00:04:51.253 21:19:16 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:51.253 21:19:16 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:51.253 21:19:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:51.253 21:19:16 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:51.253 21:19:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:51.253 21:19:16 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:04:51.253 21:19:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
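Given the option table just printed, the failure is easy to reproduce outside the harness: -w accepts only the listed workload types, so any other value makes spdk_app_parse_args reject the command line before the app starts. Using the binary path seen throughout this log:

    # expected to fail: 'foobar' is not a supported workload type
    ./build/examples/accel_perf -t 1 -w foobar
    echo "exit status: $?"    # non-zero, which is exactly what the NOT wrapper asserts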
xor -y -x -1 00:04:51.253 21:19:16 -- accel/accel.sh@12 -- # build_accel_config 00:04:51.253 21:19:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.253 21:19:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.253 21:19:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.253 21:19:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.253 21:19:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.253 21:19:16 -- accel/accel.sh@40 -- # local IFS=, 00:04:51.253 21:19:16 -- accel/accel.sh@41 -- # jq -r . 00:04:51.253 -x option must be non-negative. 00:04:51.253 [2024-04-24 21:19:16.696646] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:51.253 accel_perf options: 00:04:51.253 [-h help message] 00:04:51.253 [-q queue depth per core] 00:04:51.253 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:51.253 [-T number of threads per core 00:04:51.253 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:51.253 [-t time in seconds] 00:04:51.253 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:51.253 [ dif_verify, , dif_generate, dif_generate_copy 00:04:51.253 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:51.253 [-l for compress/decompress workloads, name of uncompressed input file 00:04:51.253 [-S for crc32c workload, use this seed value (default 0) 00:04:51.253 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:51.253 [-f for fill workload, use this BYTE value (default 255) 00:04:51.253 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:51.253 [-y verify result if this switch is on] 00:04:51.253 [-a tasks to allocate per core (default: same value as -q)] 00:04:51.253 Can be used to spread operations across a wider range of memory. 
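Buffer-count validation follows the same pattern: per the usage text, -x sets the number of xor source buffers (minimum 2), and a negative value is rejected during argument parsing. Two illustrative invocations:

    ./build/examples/accel_perf -t 1 -w xor -y -x -1   # rejected: "-x option must be non-negative."
    ./build/examples/accel_perf -t 1 -w xor -y -x 3    # accepted: xor across three source buffers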
00:04:51.253 21:19:16 -- common/autotest_common.sh@641 -- # es=1 00:04:51.253 21:19:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:51.253 21:19:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:51.253 21:19:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:51.253 00:04:51.253 real 0m0.023s 00:04:51.253 user 0m0.014s 00:04:51.253 sys 0m0.008s 00:04:51.253 21:19:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.253 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:51.253 ************************************ 00:04:51.253 END TEST accel_negative_buffers 00:04:51.253 ************************************ 00:04:51.253 Error: writing output failed: Broken pipe 00:04:51.253 21:19:16 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:51.253 21:19:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:51.253 21:19:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.253 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:51.253 ************************************ 00:04:51.253 START TEST accel_crc32c 00:04:51.253 ************************************ 00:04:51.253 21:19:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:51.253 21:19:16 -- accel/accel.sh@16 -- # local accel_opc 00:04:51.253 21:19:16 -- accel/accel.sh@17 -- # local accel_module 00:04:51.253 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:04:51.253 21:19:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:51.253 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:04:51.253 21:19:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:51.253 21:19:16 -- accel/accel.sh@12 -- # build_accel_config 00:04:51.253 21:19:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.253 21:19:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.253 21:19:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.253 21:19:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.253 21:19:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.253 21:19:16 -- accel/accel.sh@40 -- # local IFS=, 00:04:51.253 21:19:16 -- accel/accel.sh@41 -- # jq -r . 00:04:51.253 [2024-04-24 21:19:16.829020] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
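accel_crc32c, starting above, is the first positive test in this group: rather than asserting a parse failure, it runs a real one-second crc32c workload and then checks that a software module actually executed the opcode. Stripped of the run_test/accel_test wrappers, the invocation reduces to:

    # crc32c for 1 second on the default 4 KiB transfer size, seed 32, verifying results
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y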
00:04:51.253 [2024-04-24 21:19:16.829081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492883 ] 00:04:51.253 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.253 [2024-04-24 21:19:16.891744] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.529 [2024-04-24 21:19:17.009124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val= 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val= 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val=0x1 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val= 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val= 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val=crc32c 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val=32 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val= 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val=software 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@22 -- # accel_module=software 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val=32 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val=32 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- 
accel/accel.sh@20 -- # val=1 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val=Yes 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val= 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:51.529 21:19:17 -- accel/accel.sh@20 -- # val= 00:04:51.529 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:04:51.529 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:04:52.918 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:52.918 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.918 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:52.919 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:52.919 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:52.919 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:52.919 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:52.919 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:52.919 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:52.919 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:52.919 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:52.919 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:52.919 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:52.919 21:19:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:52.919 21:19:18 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:52.919 21:19:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:52.919 00:04:52.919 real 0m1.454s 00:04:52.919 user 0m1.315s 00:04:52.919 sys 0m0.141s 00:04:52.919 21:19:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.919 21:19:18 -- common/autotest_common.sh@10 -- # set +x 00:04:52.919 ************************************ 00:04:52.919 END TEST accel_crc32c 00:04:52.919 ************************************ 00:04:52.919 21:19:18 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:52.919 21:19:18 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:52.919 21:19:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.919 21:19:18 -- common/autotest_common.sh@10 -- # set +x 00:04:52.919 ************************************ 00:04:52.919 START TEST 
accel_crc32c_C2 00:04:52.919 ************************************ 00:04:52.919 21:19:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:52.919 21:19:18 -- accel/accel.sh@16 -- # local accel_opc 00:04:52.919 21:19:18 -- accel/accel.sh@17 -- # local accel_module 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:52.919 21:19:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:52.919 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:52.919 21:19:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:52.919 21:19:18 -- accel/accel.sh@12 -- # build_accel_config 00:04:52.919 21:19:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.919 21:19:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.919 21:19:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.919 21:19:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.919 21:19:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.919 21:19:18 -- accel/accel.sh@40 -- # local IFS=, 00:04:52.919 21:19:18 -- accel/accel.sh@41 -- # jq -r . 00:04:52.919 [2024-04-24 21:19:18.412988] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:52.919 [2024-04-24 21:19:18.413056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493047 ] 00:04:52.919 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.919 [2024-04-24 21:19:18.475828] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.193 [2024-04-24 21:19:18.593038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val=0x1 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val=crc32c 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val=0 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val=software 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@22 -- # accel_module=software 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val=32 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val=32 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val=1 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val=Yes 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:53.193 21:19:18 -- accel/accel.sh@20 -- # val= 00:04:53.193 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:04:53.193 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:04:54.567 21:19:19 -- accel/accel.sh@20 -- # val= 00:04:54.567 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:04:54.567 21:19:19 -- accel/accel.sh@20 -- # val= 00:04:54.567 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:04:54.567 21:19:19 -- accel/accel.sh@20 -- # val= 00:04:54.567 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:04:54.567 21:19:19 -- accel/accel.sh@20 -- # val= 00:04:54.567 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:04:54.567 21:19:19 -- accel/accel.sh@20 -- # val= 00:04:54.567 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:04:54.567 21:19:19 -- 
accel/accel.sh@19 -- # read -r var val 00:04:54.567 21:19:19 -- accel/accel.sh@20 -- # val= 00:04:54.567 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:04:54.567 21:19:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:54.567 21:19:19 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:54.567 21:19:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.567 00:04:54.567 real 0m1.468s 00:04:54.567 user 0m1.326s 00:04:54.567 sys 0m0.142s 00:04:54.567 21:19:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.567 21:19:19 -- common/autotest_common.sh@10 -- # set +x 00:04:54.567 ************************************ 00:04:54.567 END TEST accel_crc32c_C2 00:04:54.567 ************************************ 00:04:54.567 21:19:19 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:54.567 21:19:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:54.567 21:19:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.567 21:19:19 -- common/autotest_common.sh@10 -- # set +x 00:04:54.567 ************************************ 00:04:54.567 START TEST accel_copy 00:04:54.567 ************************************ 00:04:54.567 21:19:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:04:54.567 21:19:19 -- accel/accel.sh@16 -- # local accel_opc 00:04:54.567 21:19:19 -- accel/accel.sh@17 -- # local accel_module 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:04:54.567 21:19:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:54.567 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:04:54.567 21:19:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:04:54.567 21:19:19 -- accel/accel.sh@12 -- # build_accel_config 00:04:54.567 21:19:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.567 21:19:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.567 21:19:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.567 21:19:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.567 21:19:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.567 21:19:19 -- accel/accel.sh@40 -- # local IFS=, 00:04:54.567 21:19:19 -- accel/accel.sh@41 -- # jq -r . 00:04:54.567 [2024-04-24 21:19:20.007294] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
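Every 'START TEST ... END TEST' banner pair in this log, with the real/user/sys triple in between, comes from the run_test helper timing the case it wraps. A simplified sketch of that shape (the banner text matches the log; the body is illustrative, not the exact autotest_common.sh definition):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }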
00:04:54.567 [2024-04-24 21:19:20.007359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493331 ] 00:04:54.567 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.567 [2024-04-24 21:19:20.071073] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.567 [2024-04-24 21:19:20.187938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val= 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val= 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val=0x1 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val= 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val= 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val=copy 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@23 -- # accel_opc=copy 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val= 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val=software 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@22 -- # accel_module=software 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val=32 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val=32 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val=1 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val=Yes 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val= 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:54.825 21:19:20 -- accel/accel.sh@20 -- # val= 00:04:54.825 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:04:54.825 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:04:56.198 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.198 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.198 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.198 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.198 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.198 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.198 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.198 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.198 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.198 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.198 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.198 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.198 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.198 21:19:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:56.198 21:19:21 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:04:56.198 21:19:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:56.198 00:04:56.198 real 0m1.475s 00:04:56.198 user 0m1.332s 00:04:56.198 sys 0m0.144s 00:04:56.198 21:19:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.199 21:19:21 -- common/autotest_common.sh@10 -- # set +x 00:04:56.199 ************************************ 00:04:56.199 END TEST accel_copy 00:04:56.199 ************************************ 00:04:56.199 21:19:21 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:56.199 21:19:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:04:56.199 21:19:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.199 21:19:21 -- common/autotest_common.sh@10 -- # set +x 00:04:56.199 ************************************ 00:04:56.199 START TEST accel_fill 00:04:56.199 ************************************ 00:04:56.199 21:19:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:56.199 21:19:21 -- accel/accel.sh@16 -- # local accel_opc 
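accel_fill, starting above, exercises more of the option surface than the earlier cases: -f 128 is the fill byte (hence val=0x80 in the trace; default 255 per the usage table), -q 64 the queue depth per core, and -a 64 the tasks allocated per core. The equivalent direct call:

    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y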
00:04:56.199 21:19:21 -- accel/accel.sh@17 -- # local accel_module 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.199 21:19:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.199 21:19:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:56.199 21:19:21 -- accel/accel.sh@12 -- # build_accel_config 00:04:56.199 21:19:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.199 21:19:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.199 21:19:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.199 21:19:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.199 21:19:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.199 21:19:21 -- accel/accel.sh@40 -- # local IFS=, 00:04:56.199 21:19:21 -- accel/accel.sh@41 -- # jq -r . 00:04:56.199 [2024-04-24 21:19:21.604852] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:56.199 [2024-04-24 21:19:21.604918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493499 ] 00:04:56.199 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.199 [2024-04-24 21:19:21.666769] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.199 [2024-04-24 21:19:21.782470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.199 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.199 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.199 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.199 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.199 21:19:21 -- accel/accel.sh@20 -- # val=0x1 00:04:56.199 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.199 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.199 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.199 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.199 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.199 21:19:21 -- accel/accel.sh@20 -- # val=fill 00:04:56.199 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.199 21:19:21 -- accel/accel.sh@23 -- # accel_opc=fill 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.199 21:19:21 -- accel/accel.sh@20 -- # val=0x80 00:04:56.199 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.199 21:19:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:56.199 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.199 21:19:21 -- accel/accel.sh@19 
-- # read -r var val 00:04:56.199 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.199 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.199 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.200 21:19:21 -- accel/accel.sh@20 -- # val=software 00:04:56.200 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.200 21:19:21 -- accel/accel.sh@22 -- # accel_module=software 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.200 21:19:21 -- accel/accel.sh@20 -- # val=64 00:04:56.200 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.200 21:19:21 -- accel/accel.sh@20 -- # val=64 00:04:56.200 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.200 21:19:21 -- accel/accel.sh@20 -- # val=1 00:04:56.200 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.200 21:19:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:56.200 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.200 21:19:21 -- accel/accel.sh@20 -- # val=Yes 00:04:56.200 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.200 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.200 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:56.200 21:19:21 -- accel/accel.sh@20 -- # val= 00:04:56.200 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:04:56.200 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:04:57.577 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.577 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.577 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.577 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.577 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.577 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.577 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.577 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.577 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.577 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.577 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.577 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.577 21:19:23 -- accel/accel.sh@19 
-- # IFS=: 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.577 21:19:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:57.577 21:19:23 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:04:57.577 21:19:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:57.577 00:04:57.577 real 0m1.470s 00:04:57.577 user 0m1.328s 00:04:57.577 sys 0m0.143s 00:04:57.577 21:19:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.577 21:19:23 -- common/autotest_common.sh@10 -- # set +x 00:04:57.577 ************************************ 00:04:57.577 END TEST accel_fill 00:04:57.577 ************************************ 00:04:57.577 21:19:23 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:57.577 21:19:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:57.577 21:19:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.577 21:19:23 -- common/autotest_common.sh@10 -- # set +x 00:04:57.577 ************************************ 00:04:57.577 START TEST accel_copy_crc32c 00:04:57.577 ************************************ 00:04:57.577 21:19:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:04:57.577 21:19:23 -- accel/accel.sh@16 -- # local accel_opc 00:04:57.577 21:19:23 -- accel/accel.sh@17 -- # local accel_module 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.577 21:19:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:57.577 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.577 21:19:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:04:57.577 21:19:23 -- accel/accel.sh@12 -- # build_accel_config 00:04:57.577 21:19:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.577 21:19:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.577 21:19:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.577 21:19:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.577 21:19:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.577 21:19:23 -- accel/accel.sh@40 -- # local IFS=, 00:04:57.577 21:19:23 -- accel/accel.sh@41 -- # jq -r . 00:04:57.577 [2024-04-24 21:19:23.196384] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
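copy_crc32c is the fused operation: a single pass copies the source buffer into a destination and computes its CRC-32C, which is why this trace carries two '4096 bytes' values, one per buffer. Direct form of the invocation:

    ./build/examples/accel_perf -t 1 -w copy_crc32c -y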
00:04:57.577 [2024-04-24 21:19:23.196448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493778 ] 00:04:57.577 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.835 [2024-04-24 21:19:23.257265] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.835 [2024-04-24 21:19:23.372189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val=0x1 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val=0 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val=software 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@22 -- # accel_module=software 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val=32 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 
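The recurring IFS=: / read -r var val / case "$var" in fragments (accel.sh lines 19-21 in these traces) are one loop parsing accel_perf's report into key:value pairs so the test can assert which module ran which opcode, feeding the [[ -n software ]] / [[ -n crc32c ]] checks after each run. An outline reconstruction with illustrative field names:

    while IFS=: read -r var val; do          # split each "key: value" report line on ':'
        val=${val## }                        # trim the space left after the colon
        case "$var" in
            "Module")   accel_module=$val ;; # hypothetical keys; the real loop matches
            "Workload") accel_opc=$val ;;    # accel_perf's actual output fields
        esac
    done < <(./build/examples/accel_perf -t 1 -w crc32c -S 32 -y)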
00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val=32 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val=1 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val=Yes 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:57.835 21:19:23 -- accel/accel.sh@20 -- # val= 00:04:57.835 21:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # IFS=: 00:04:57.835 21:19:23 -- accel/accel.sh@19 -- # read -r var val 00:04:59.209 21:19:24 -- accel/accel.sh@20 -- # val= 00:04:59.209 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:04:59.209 21:19:24 -- accel/accel.sh@20 -- # val= 00:04:59.209 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:04:59.209 21:19:24 -- accel/accel.sh@20 -- # val= 00:04:59.209 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:04:59.209 21:19:24 -- accel/accel.sh@20 -- # val= 00:04:59.209 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:04:59.209 21:19:24 -- accel/accel.sh@20 -- # val= 00:04:59.209 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:04:59.209 21:19:24 -- accel/accel.sh@20 -- # val= 00:04:59.209 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:04:59.209 21:19:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:59.209 21:19:24 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:59.209 21:19:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:59.209 00:04:59.209 real 0m1.474s 00:04:59.209 user 0m1.327s 00:04:59.209 sys 0m0.149s 00:04:59.209 21:19:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.209 21:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:59.209 ************************************ 00:04:59.209 END TEST accel_copy_crc32c 00:04:59.209 ************************************ 00:04:59.209 21:19:24 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:59.209 
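The _C2 variants repeat a workload with -C 2, which per the usage table sets the io vector size, so each operation is submitted over two chained buffer segments (the '4096 bytes' / '8192 bytes' pair in the trace that follows is consistent with a second buffer sized for two 4096-byte segments). Direct form:

    # fused copy+crc32c again, but with a 2-segment io vector per operation
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2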
21:19:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:59.209 21:19:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.209 21:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:59.209 ************************************ 00:04:59.209 START TEST accel_copy_crc32c_C2 00:04:59.209 ************************************ 00:04:59.209 21:19:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:59.209 21:19:24 -- accel/accel.sh@16 -- # local accel_opc 00:04:59.209 21:19:24 -- accel/accel.sh@17 -- # local accel_module 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:04:59.209 21:19:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:59.209 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:04:59.209 21:19:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:04:59.209 21:19:24 -- accel/accel.sh@12 -- # build_accel_config 00:04:59.209 21:19:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.209 21:19:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.209 21:19:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.209 21:19:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.209 21:19:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.209 21:19:24 -- accel/accel.sh@40 -- # local IFS=, 00:04:59.209 21:19:24 -- accel/accel.sh@41 -- # jq -r . 00:04:59.209 [2024-04-24 21:19:24.791466] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:04:59.209 [2024-04-24 21:19:24.791528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493944 ] 00:04:59.210 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.210 [2024-04-24 21:19:24.852793] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.468 [2024-04-24 21:19:24.970145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val= 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val= 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val=0x1 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val= 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val= 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 
21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val=0 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val='8192 bytes' 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val= 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val=software 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@22 -- # accel_module=software 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val=32 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val=32 00:04:59.468 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.468 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.468 21:19:25 -- accel/accel.sh@20 -- # val=1 00:04:59.469 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.469 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.469 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.469 21:19:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:59.469 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.469 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.469 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.469 21:19:25 -- accel/accel.sh@20 -- # val=Yes 00:04:59.469 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.469 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.469 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.469 21:19:25 -- accel/accel.sh@20 -- # val= 00:04:59.469 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.469 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.469 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.469 21:19:25 -- accel/accel.sh@20 -- # val= 00:04:59.469 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.469 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.469 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:05:00.854 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:00.854 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.854 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:00.854 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:00.854 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:00.854 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.854 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:00.855 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:00.855 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:00.855 21:19:26 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:00.855 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:00.855 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:00.855 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:00.855 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.855 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:00.855 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:00.855 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:00.855 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.855 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:00.855 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:00.855 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:00.855 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.855 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:00.855 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:00.855 21:19:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:00.855 21:19:26 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:00.855 21:19:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:00.855 00:05:00.855 real 0m1.469s 00:05:00.855 user 0m1.332s 00:05:00.855 sys 0m0.139s 00:05:00.855 21:19:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.855 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:05:00.855 ************************************ 00:05:00.855 END TEST accel_copy_crc32c_C2 00:05:00.855 ************************************ 00:05:00.855 21:19:26 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:00.855 21:19:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:00.855 21:19:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.855 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:05:00.855 ************************************ 00:05:00.855 START TEST accel_dualcast 00:05:00.855 ************************************ 00:05:00.855 21:19:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:05:00.855 21:19:26 -- accel/accel.sh@16 -- # local accel_opc 00:05:00.855 21:19:26 -- accel/accel.sh@17 -- # local accel_module 00:05:00.855 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:00.855 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:00.855 21:19:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:00.855 21:19:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:00.855 21:19:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:00.855 21:19:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:00.855 21:19:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:00.855 21:19:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.855 21:19:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.855 21:19:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:00.855 21:19:26 -- accel/accel.sh@40 -- # local IFS=, 00:05:00.855 21:19:26 -- accel/accel.sh@41 -- # jq -r . 00:05:00.855 [2024-04-24 21:19:26.383999] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:05:00.855 [2024-04-24 21:19:26.384059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494123 ] 00:05:00.855 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.855 [2024-04-24 21:19:26.449113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.113 [2024-04-24 21:19:26.566943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.113 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:01.113 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.113 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.113 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.113 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:01.113 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.113 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.113 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.113 21:19:26 -- accel/accel.sh@20 -- # val=0x1 00:05:01.113 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.113 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.113 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.113 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:01.113 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.113 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.113 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.113 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:01.113 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.113 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.113 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.113 21:19:26 -- accel/accel.sh@20 -- # val=dualcast 00:05:01.114 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.114 21:19:26 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.114 21:19:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:01.114 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.114 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:01.114 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.114 21:19:26 -- accel/accel.sh@20 -- # val=software 00:05:01.114 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.114 21:19:26 -- accel/accel.sh@22 -- # accel_module=software 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.114 21:19:26 -- accel/accel.sh@20 -- # val=32 00:05:01.114 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.114 21:19:26 -- accel/accel.sh@20 -- # val=32 00:05:01.114 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.114 21:19:26 -- accel/accel.sh@20 -- # val=1 00:05:01.114 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.114 21:19:26 
-- accel/accel.sh@20 -- # val='1 seconds' 00:05:01.114 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.114 21:19:26 -- accel/accel.sh@20 -- # val=Yes 00:05:01.114 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.114 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:01.114 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.114 21:19:26 -- accel/accel.sh@20 -- # val= 00:05:01.114 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:05:01.114 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:05:02.488 21:19:27 -- accel/accel.sh@20 -- # val= 00:05:02.488 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:05:02.488 21:19:27 -- accel/accel.sh@20 -- # val= 00:05:02.488 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:05:02.488 21:19:27 -- accel/accel.sh@20 -- # val= 00:05:02.488 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:05:02.488 21:19:27 -- accel/accel.sh@20 -- # val= 00:05:02.488 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:05:02.488 21:19:27 -- accel/accel.sh@20 -- # val= 00:05:02.488 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:05:02.488 21:19:27 -- accel/accel.sh@20 -- # val= 00:05:02.488 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:05:02.488 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:05:02.488 21:19:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:02.488 21:19:27 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:02.488 21:19:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.488 00:05:02.488 real 0m1.475s 00:05:02.488 user 0m1.328s 00:05:02.488 sys 0m0.148s 00:05:02.488 21:19:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.488 21:19:27 -- common/autotest_common.sh@10 -- # set +x 00:05:02.488 ************************************ 00:05:02.488 END TEST accel_dualcast 00:05:02.488 ************************************ 00:05:02.488 21:19:27 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:02.489 21:19:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:02.513 21:19:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.513 21:19:27 -- common/autotest_common.sh@10 -- # set +x 00:05:02.513 ************************************ 00:05:02.513 START TEST accel_compare 00:05:02.513 ************************************ 00:05:02.513 21:19:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:05:02.513 21:19:27 -- accel/accel.sh@16 -- # local accel_opc 00:05:02.513 21:19:27 
-- accel/accel.sh@17 -- # local accel_module 00:05:02.513 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:05:02.513 21:19:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:02.513 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:05:02.513 21:19:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:02.513 21:19:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:02.513 21:19:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.513 21:19:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.513 21:19:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.513 21:19:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.513 21:19:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.513 21:19:27 -- accel/accel.sh@40 -- # local IFS=, 00:05:02.513 21:19:27 -- accel/accel.sh@41 -- # jq -r . 00:05:02.513 [2024-04-24 21:19:27.973598] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:05:02.513 [2024-04-24 21:19:27.973690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494393 ] 00:05:02.513 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.513 [2024-04-24 21:19:28.034739] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.513 [2024-04-24 21:19:28.151486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val= 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val= 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val=0x1 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val= 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val= 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val=compare 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@23 -- # accel_opc=compare 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val= 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- 
accel/accel.sh@20 -- # val=software 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@22 -- # accel_module=software 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val=32 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val=32 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val=1 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val=Yes 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val= 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.773 21:19:28 -- accel/accel.sh@20 -- # val= 00:05:02.773 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.773 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:05:03.762 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:03.762 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:03.762 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:03.762 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:03.762 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:03.762 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:03.762 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:03.762 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:03.762 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:03.762 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:03.762 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:03.762 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:03.762 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:03.762 21:19:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:03.762 21:19:29 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:03.762 21:19:29 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:03.762 00:05:03.762 real 0m1.482s 00:05:03.762 user 0m1.332s 00:05:03.762 sys 0m0.150s 00:05:03.762 21:19:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.762 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:05:03.762 ************************************ 00:05:03.762 END TEST accel_compare 00:05:03.762 ************************************ 00:05:04.021 21:19:29 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:04.021 21:19:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:04.021 21:19:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.021 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:05:04.021 ************************************ 00:05:04.021 START TEST accel_xor 00:05:04.021 ************************************ 00:05:04.021 21:19:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:05:04.021 21:19:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:04.021 21:19:29 -- accel/accel.sh@17 -- # local accel_module 00:05:04.021 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.021 21:19:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:04.021 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.021 21:19:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:04.021 21:19:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.021 21:19:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.021 21:19:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.021 21:19:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.021 21:19:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.021 21:19:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.021 21:19:29 -- accel/accel.sh@40 -- # local IFS=, 00:05:04.021 21:19:29 -- accel/accel.sh@41 -- # jq -r . 00:05:04.021 [2024-04-24 21:19:29.581133] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
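The xor workload is exercised twice below: first with the default two source buffers (val=2 in its trace), then re-run as a second accel_xor test with "-x 3" (three sources, matching the val=3 echoed there). A standalone sketch under the same path assumption as above:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Default XOR of two source buffers into one destination, verified (-y)
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y
    # Three-source variant, matching the "run_test accel_xor ... -x 3" line below
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y -x 3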
00:05:04.021 [2024-04-24 21:19:29.581192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494561 ] 00:05:04.021 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.021 [2024-04-24 21:19:29.645877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.281 [2024-04-24 21:19:29.764783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val=0x1 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val=xor 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val=2 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val=software 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@22 -- # accel_module=software 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val=32 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val=32 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- 
accel/accel.sh@20 -- # val=1 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val=Yes 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.281 21:19:29 -- accel/accel.sh@20 -- # val= 00:05:04.281 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:05:04.281 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:05:05.675 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.675 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.675 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.675 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.675 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.675 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.675 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.675 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.675 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.675 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.675 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.675 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.675 21:19:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:05.675 21:19:31 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:05.675 21:19:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.675 00:05:05.675 real 0m1.479s 00:05:05.675 user 0m1.333s 00:05:05.675 sys 0m0.146s 00:05:05.675 21:19:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.675 21:19:31 -- common/autotest_common.sh@10 -- # set +x 00:05:05.675 ************************************ 00:05:05.675 END TEST accel_xor 00:05:05.675 ************************************ 00:05:05.675 21:19:31 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:05.675 21:19:31 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:05.675 21:19:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.675 21:19:31 -- common/autotest_common.sh@10 -- # set +x 00:05:05.675 ************************************ 00:05:05.675 START TEST accel_xor 
00:05:05.675 ************************************ 00:05:05.675 21:19:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:05:05.675 21:19:31 -- accel/accel.sh@16 -- # local accel_opc 00:05:05.675 21:19:31 -- accel/accel.sh@17 -- # local accel_module 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.675 21:19:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:05.675 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.675 21:19:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:05.675 21:19:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.675 21:19:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.675 21:19:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.675 21:19:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.675 21:19:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.675 21:19:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.675 21:19:31 -- accel/accel.sh@40 -- # local IFS=, 00:05:05.676 21:19:31 -- accel/accel.sh@41 -- # jq -r . 00:05:05.676 [2024-04-24 21:19:31.186320] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:05:05.676 [2024-04-24 21:19:31.186384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494836 ] 00:05:05.676 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.676 [2024-04-24 21:19:31.246771] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.936 [2024-04-24 21:19:31.366544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val=0x1 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val=xor 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val=3 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val=software 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@22 -- # accel_module=software 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val=32 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val=32 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val=1 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val=Yes 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:05.936 21:19:31 -- accel/accel.sh@20 -- # val= 00:05:05.936 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:05:05.936 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:05:07.311 21:19:32 -- accel/accel.sh@20 -- # val= 00:05:07.311 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:05:07.311 21:19:32 -- accel/accel.sh@20 -- # val= 00:05:07.311 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:05:07.311 21:19:32 -- accel/accel.sh@20 -- # val= 00:05:07.311 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:05:07.311 21:19:32 -- accel/accel.sh@20 -- # val= 00:05:07.311 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:05:07.311 21:19:32 -- accel/accel.sh@20 -- # val= 00:05:07.311 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # 
read -r var val 00:05:07.311 21:19:32 -- accel/accel.sh@20 -- # val= 00:05:07.311 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:05:07.311 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:05:07.311 21:19:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.311 21:19:32 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:07.311 21:19:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.311 00:05:07.311 real 0m1.482s 00:05:07.311 user 0m1.335s 00:05:07.311 sys 0m0.148s 00:05:07.312 21:19:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.312 21:19:32 -- common/autotest_common.sh@10 -- # set +x 00:05:07.312 ************************************ 00:05:07.312 END TEST accel_xor 00:05:07.312 ************************************ 00:05:07.312 21:19:32 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:07.312 21:19:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:07.312 21:19:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.312 21:19:32 -- common/autotest_common.sh@10 -- # set +x 00:05:07.312 ************************************ 00:05:07.312 START TEST accel_dif_verify 00:05:07.312 ************************************ 00:05:07.312 21:19:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:05:07.312 21:19:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:07.312 21:19:32 -- accel/accel.sh@17 -- # local accel_module 00:05:07.312 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:05:07.312 21:19:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:07.312 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:05:07.312 21:19:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:07.312 21:19:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.312 21:19:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.312 21:19:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.312 21:19:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.312 21:19:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.312 21:19:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.312 21:19:32 -- accel/accel.sh@40 -- # local IFS=, 00:05:07.312 21:19:32 -- accel/accel.sh@41 -- # jq -r . 00:05:07.312 [2024-04-24 21:19:32.792489] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
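The DIF tests starting here drop the -y flag (their traces echo "val=No" where the earlier workloads showed "val=Yes"): rather than a plain payload compare, dif_verify checks protection-information fields, and the 4096-, 512- and 8-byte values echoed in the trace appear to correspond to buffer, block, and per-block DIF metadata sizes. A standalone sketch, same assumptions as above:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w dif_verify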
00:05:07.312 [2024-04-24 21:19:32.792552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495006 ] 00:05:07.312 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.312 [2024-04-24 21:19:32.854034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.312 [2024-04-24 21:19:32.974794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.586 21:19:33 -- accel/accel.sh@20 -- # val= 00:05:07.586 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.586 21:19:33 -- accel/accel.sh@20 -- # val= 00:05:07.586 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.586 21:19:33 -- accel/accel.sh@20 -- # val=0x1 00:05:07.586 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.586 21:19:33 -- accel/accel.sh@20 -- # val= 00:05:07.586 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.586 21:19:33 -- accel/accel.sh@20 -- # val= 00:05:07.586 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.586 21:19:33 -- accel/accel.sh@20 -- # val=dif_verify 00:05:07.586 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.586 21:19:33 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.586 21:19:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.586 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.586 21:19:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.586 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.586 21:19:33 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:07.586 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.586 21:19:33 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:07.586 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.586 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.587 21:19:33 -- accel/accel.sh@20 -- # val= 00:05:07.587 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.587 21:19:33 -- accel/accel.sh@20 -- # val=software 00:05:07.587 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.587 21:19:33 -- accel/accel.sh@22 -- # accel_module=software 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # read -r 
var val 00:05:07.587 21:19:33 -- accel/accel.sh@20 -- # val=32 00:05:07.587 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.587 21:19:33 -- accel/accel.sh@20 -- # val=32 00:05:07.587 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.587 21:19:33 -- accel/accel.sh@20 -- # val=1 00:05:07.587 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.587 21:19:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.587 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.587 21:19:33 -- accel/accel.sh@20 -- # val=No 00:05:07.587 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.587 21:19:33 -- accel/accel.sh@20 -- # val= 00:05:07.587 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.587 21:19:33 -- accel/accel.sh@20 -- # val= 00:05:07.587 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.587 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:05:08.959 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:08.959 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:08.959 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:08.959 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:08.959 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:08.959 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:08.959 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:08.959 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:08.959 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:08.959 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:08.959 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:08.959 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:08.959 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:08.959 21:19:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:08.959 21:19:34 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:08.960 21:19:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.960 00:05:08.960 real 0m1.478s 00:05:08.960 user 0m1.337s 00:05:08.960 sys 0m0.144s 00:05:08.960 21:19:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.960 21:19:34 -- common/autotest_common.sh@10 -- # set +x 00:05:08.960 
************************************ 00:05:08.960 END TEST accel_dif_verify 00:05:08.960 ************************************ 00:05:08.960 21:19:34 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:08.960 21:19:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:08.960 21:19:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.960 21:19:34 -- common/autotest_common.sh@10 -- # set +x 00:05:08.960 ************************************ 00:05:08.960 START TEST accel_dif_generate 00:05:08.960 ************************************ 00:05:08.960 21:19:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:05:08.960 21:19:34 -- accel/accel.sh@16 -- # local accel_opc 00:05:08.960 21:19:34 -- accel/accel.sh@17 -- # local accel_module 00:05:08.960 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:08.960 21:19:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:08.960 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:08.960 21:19:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:08.960 21:19:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:08.960 21:19:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.960 21:19:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.960 21:19:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.960 21:19:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.960 21:19:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.960 21:19:34 -- accel/accel.sh@40 -- # local IFS=, 00:05:08.960 21:19:34 -- accel/accel.sh@41 -- # jq -r . 00:05:08.960 [2024-04-24 21:19:34.392848] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
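dif_generate is the producer-side counterpart of dif_verify: the same buffer/block/metadata sizes are echoed in its trace, but accel_perf generates the DIF fields rather than checking them. Standalone sketch, same assumptions:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w dif_generate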
00:05:08.960 [2024-04-24 21:19:34.392913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495286 ] 00:05:08.960 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.960 [2024-04-24 21:19:34.454575] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.960 [2024-04-24 21:19:34.575647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.960 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:08.960 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.960 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:08.960 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:08.960 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:08.960 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:08.960 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.217 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.217 21:19:34 -- accel/accel.sh@20 -- # val=0x1 00:05:09.217 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.217 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.217 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.217 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:09.217 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.217 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.217 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.217 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:09.217 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.217 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.217 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.217 21:19:34 -- accel/accel.sh@20 -- # val=dif_generate 00:05:09.217 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.217 21:19:34 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val=software 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@22 -- # accel_module=software 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read 
-r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val=32 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val=32 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val=1 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val=No 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:09.218 21:19:34 -- accel/accel.sh@20 -- # val= 00:05:09.218 21:19:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # IFS=: 00:05:09.218 21:19:34 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:35 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:35 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:35 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:35 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:35 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:35 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.590 21:19:35 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:10.590 21:19:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.590 00:05:10.590 real 0m1.488s 00:05:10.590 user 0m1.351s 00:05:10.590 sys 0m0.140s 00:05:10.590 21:19:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.590 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:05:10.590 
************************************ 00:05:10.590 END TEST accel_dif_generate 00:05:10.590 ************************************ 00:05:10.590 21:19:35 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:10.590 21:19:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:10.590 21:19:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.590 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:05:10.590 ************************************ 00:05:10.590 START TEST accel_dif_generate_copy 00:05:10.590 ************************************ 00:05:10.590 21:19:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:05:10.590 21:19:35 -- accel/accel.sh@16 -- # local accel_opc 00:05:10.590 21:19:35 -- accel/accel.sh@17 -- # local accel_module 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:10.590 21:19:35 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:10.590 21:19:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.590 21:19:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.590 21:19:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.590 21:19:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.590 21:19:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.590 21:19:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.590 21:19:35 -- accel/accel.sh@40 -- # local IFS=, 00:05:10.590 21:19:35 -- accel/accel.sh@41 -- # jq -r . 00:05:10.590 [2024-04-24 21:19:36.005414] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
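dif_generate_copy combines DIF generation with a copy; accordingly the trace below echoes two "4096 bytes" buffers (presumably source and destination) and no separate block/metadata sizes. Standalone sketch, same assumptions:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w dif_generate_copy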
00:05:10.590 [2024-04-24 21:19:36.005478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495456 ] 00:05:10.590 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.590 [2024-04-24 21:19:36.070408] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.590 [2024-04-24 21:19:36.191125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val=0x1 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val=software 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@22 -- # accel_module=software 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val=32 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val=32 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r 
var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val=1 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val=No 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.590 21:19:36 -- accel/accel.sh@20 -- # val= 00:05:10.590 21:19:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.590 21:19:36 -- accel/accel.sh@19 -- # read -r var val 00:05:11.962 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:11.962 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:11.962 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:11.962 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:11.962 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:11.962 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:11.962 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:11.962 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:11.962 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:11.962 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:11.962 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:11.962 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:11.962 21:19:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.962 21:19:37 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:11.962 21:19:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.962 00:05:11.962 real 0m1.491s 00:05:11.962 user 0m1.349s 00:05:11.962 sys 0m0.144s 00:05:11.962 21:19:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.962 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:05:11.962 ************************************ 00:05:11.962 END TEST accel_dif_generate_copy 00:05:11.962 ************************************ 00:05:11.962 21:19:37 -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:11.962 21:19:37 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.962 21:19:37 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:11.962 21:19:37 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.962 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:05:11.962 ************************************ 00:05:11.962 START TEST accel_comp 00:05:11.962 ************************************ 00:05:11.962 21:19:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.962 21:19:37 -- accel/accel.sh@16 -- # local accel_opc 00:05:11.962 21:19:37 -- accel/accel.sh@17 -- # local accel_module 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:11.962 21:19:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.962 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:11.962 21:19:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.962 21:19:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.962 21:19:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.962 21:19:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.962 21:19:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.962 21:19:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.962 21:19:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.962 21:19:37 -- accel/accel.sh@40 -- # local IFS=, 00:05:11.962 21:19:37 -- accel/accel.sh@41 -- # jq -r . 00:05:11.962 [2024-04-24 21:19:37.617890] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:05:11.962 [2024-04-24 21:19:37.617956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495616 ] 00:05:12.220 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.220 [2024-04-24 21:19:37.683743] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.220 [2024-04-24 21:19:37.804857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.220 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val=0x1 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 
-- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val=compress 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@23 -- # accel_opc=compress 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val=software 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@22 -- # accel_module=software 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val=32 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val=32 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val=1 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val=No 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.221 21:19:37 -- accel/accel.sh@20 -- # val= 00:05:12.221 21:19:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # IFS=: 00:05:12.221 21:19:37 -- accel/accel.sh@19 -- # read -r var val 00:05:13.595 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.595 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.595 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.595 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # read 
-r var val 00:05:13.595 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.595 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.595 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.595 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.595 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.595 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.595 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.595 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.595 21:19:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.595 21:19:39 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:13.595 21:19:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.595 00:05:13.595 real 0m1.494s 00:05:13.595 user 0m1.348s 00:05:13.595 sys 0m0.148s 00:05:13.595 21:19:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.595 21:19:39 -- common/autotest_common.sh@10 -- # set +x 00:05:13.595 ************************************ 00:05:13.595 END TEST accel_comp 00:05:13.595 ************************************ 00:05:13.595 21:19:39 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.595 21:19:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:13.595 21:19:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.595 21:19:39 -- common/autotest_common.sh@10 -- # set +x 00:05:13.595 ************************************ 00:05:13.595 START TEST accel_decomp 00:05:13.595 ************************************ 00:05:13.595 21:19:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.595 21:19:39 -- accel/accel.sh@16 -- # local accel_opc 00:05:13.595 21:19:39 -- accel/accel.sh@17 -- # local accel_module 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.595 21:19:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.595 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.595 21:19:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.595 21:19:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.595 21:19:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.595 21:19:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.595 21:19:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.595 21:19:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.595 21:19:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.595 21:19:39 -- accel/accel.sh@40 -- # local IFS=, 00:05:13.595 21:19:39 -- accel/accel.sh@41 -- # jq -r . 00:05:13.595 [2024-04-24 21:19:39.234085] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
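(The compress pass above completed in 1.494 s of wall time for its 1-second workload; the decompress run starting here reuses the same accel_perf binary with -w decompress. A minimal standalone sketch, assuming this workspace's SPDK build tree; the harness additionally passes -c /dev/fd/62 to feed the accel JSON config it generates, which a plain software-module run should not need:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second software decompress pass over test/accel/bib in 4096-byte blocks; -y verifies each result
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y

The EAL parameter line that follows records the arguments the harness actually produced for this process.)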
00:05:13.595 [2024-04-24 21:19:39.234148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495899 ] 00:05:13.595 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.854 [2024-04-24 21:19:39.300519] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.854 [2024-04-24 21:19:39.421330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val=0x1 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val=decompress 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val=software 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@22 -- # accel_module=software 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val=32 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 
-- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val=32 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val=1 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val=Yes 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.854 21:19:39 -- accel/accel.sh@20 -- # val= 00:05:13.854 21:19:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.854 21:19:39 -- accel/accel.sh@19 -- # read -r var val 00:05:15.258 21:19:40 -- accel/accel.sh@20 -- # val= 00:05:15.258 21:19:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # IFS=: 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # read -r var val 00:05:15.258 21:19:40 -- accel/accel.sh@20 -- # val= 00:05:15.258 21:19:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # IFS=: 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # read -r var val 00:05:15.258 21:19:40 -- accel/accel.sh@20 -- # val= 00:05:15.258 21:19:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # IFS=: 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # read -r var val 00:05:15.258 21:19:40 -- accel/accel.sh@20 -- # val= 00:05:15.258 21:19:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # IFS=: 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # read -r var val 00:05:15.258 21:19:40 -- accel/accel.sh@20 -- # val= 00:05:15.258 21:19:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # IFS=: 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # read -r var val 00:05:15.258 21:19:40 -- accel/accel.sh@20 -- # val= 00:05:15.258 21:19:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # IFS=: 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # read -r var val 00:05:15.258 21:19:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.258 21:19:40 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:15.258 21:19:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.258 00:05:15.258 real 0m1.475s 00:05:15.258 user 0m1.339s 00:05:15.258 sys 0m0.138s 00:05:15.258 21:19:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.258 21:19:40 -- common/autotest_common.sh@10 -- # set +x 00:05:15.258 ************************************ 00:05:15.258 END TEST accel_decomp 00:05:15.258 ************************************ 00:05:15.258 21:19:40 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:15.258 21:19:40 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:15.258 21:19:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.258 21:19:40 -- common/autotest_common.sh@10 -- # set +x 00:05:15.258 ************************************ 00:05:15.258 START TEST accel_decmop_full 00:05:15.258 ************************************ 00:05:15.258 21:19:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:15.258 21:19:40 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.258 21:19:40 -- accel/accel.sh@17 -- # local accel_module 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # IFS=: 00:05:15.258 21:19:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:15.258 21:19:40 -- accel/accel.sh@19 -- # read -r var val 00:05:15.258 21:19:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:15.258 21:19:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.258 21:19:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.258 21:19:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.258 21:19:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.258 21:19:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.258 21:19:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.258 21:19:40 -- accel/accel.sh@40 -- # local IFS=, 00:05:15.258 21:19:40 -- accel/accel.sh@41 -- # jq -r . 00:05:15.258 [2024-04-24 21:19:40.834965] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
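(run_test accel_decmop_full repeats the decompress workload with -o 0 added. The val='111250 bytes' lines below, in place of the '4096 bytes' of the earlier runs, show the effect: each operation spans the whole bib test vector. Standalone sketch under the same assumptions as above:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -o 0 makes accel_perf use the full file size, 111250 bytes here, instead of 4 KiB blocks
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0
)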
00:05:15.258 [2024-04-24 21:19:40.835030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496067 ] 00:05:15.258 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.258 [2024-04-24 21:19:40.897035] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.517 [2024-04-24 21:19:41.021894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val= 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val= 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val= 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val=0x1 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val= 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val= 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val=decompress 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val= 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val=software 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@22 -- # accel_module=software 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val=32 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 
21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val=32 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val=1 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.517 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.517 21:19:41 -- accel/accel.sh@20 -- # val=Yes 00:05:15.517 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.518 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.518 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.518 21:19:41 -- accel/accel.sh@20 -- # val= 00:05:15.518 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.518 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.518 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.518 21:19:41 -- accel/accel.sh@20 -- # val= 00:05:15.518 21:19:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.518 21:19:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.518 21:19:41 -- accel/accel.sh@19 -- # read -r var val 00:05:16.892 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:16.892 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.892 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:16.893 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:16.893 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:16.893 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:16.893 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:16.893 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:16.893 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:16.893 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:16.893 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:16.893 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:16.893 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:16.893 21:19:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:16.893 21:19:42 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:16.893 21:19:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.893 00:05:16.893 real 0m1.495s 00:05:16.893 user 0m1.357s 00:05:16.893 sys 0m0.139s 00:05:16.893 21:19:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.893 21:19:42 -- common/autotest_common.sh@10 -- # set +x 00:05:16.893 ************************************ 00:05:16.893 END TEST accel_decmop_full 00:05:16.893 ************************************ 00:05:16.893 21:19:42 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:16.893 21:19:42 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:16.893 21:19:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.893 21:19:42 -- common/autotest_common.sh@10 -- # set +x 00:05:16.893 ************************************ 00:05:16.893 START TEST accel_decomp_mcore 00:05:16.893 ************************************ 00:05:16.893 21:19:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:16.893 21:19:42 -- accel/accel.sh@16 -- # local accel_opc 00:05:16.893 21:19:42 -- accel/accel.sh@17 -- # local accel_module 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:16.893 21:19:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:16.893 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:16.893 21:19:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:16.893 21:19:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:16.893 21:19:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.893 21:19:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.893 21:19:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.893 21:19:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.893 21:19:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.893 21:19:42 -- accel/accel.sh@40 -- # local IFS=, 00:05:16.893 21:19:42 -- accel/accel.sh@41 -- # jq -r . 00:05:16.893 [2024-04-24 21:19:42.453644] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
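(accel_decomp_mcore widens the core mask with -m 0xf; the notices below confirm it with 'Total cores available: 4' and one reactor started on each of cores 0 through 3. Sketch, same assumptions:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # hex core mask 0xf selects cores 0-3, so the decompress workload runs on four reactors
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf
)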
00:05:16.893 [2024-04-24 21:19:42.453718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496346 ] 00:05:16.893 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.893 [2024-04-24 21:19:42.515452] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.152 [2024-04-24 21:19:42.642168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.152 [2024-04-24 21:19:42.642223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.152 [2024-04-24 21:19:42.642277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.152 [2024-04-24 21:19:42.642280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val=0xf 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val=decompress 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val=software 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@22 -- # accel_module=software 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val=32 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.152 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.152 21:19:42 -- accel/accel.sh@20 -- # val=32 00:05:17.152 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.153 21:19:42 -- accel/accel.sh@20 -- # val=1 00:05:17.153 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.153 21:19:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.153 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.153 21:19:42 -- accel/accel.sh@20 -- # val=Yes 00:05:17.153 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.153 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:17.153 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:17.153 21:19:42 -- accel/accel.sh@20 -- # val= 00:05:17.153 21:19:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # IFS=: 00:05:17.153 21:19:42 -- accel/accel.sh@19 -- # read -r var val 00:05:18.526 21:19:43 -- accel/accel.sh@20 -- # val= 00:05:18.526 21:19:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # IFS=: 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # read -r var val 00:05:18.526 21:19:43 -- accel/accel.sh@20 -- # val= 00:05:18.526 21:19:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # IFS=: 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # read -r var val 00:05:18.526 21:19:43 -- accel/accel.sh@20 -- # val= 00:05:18.526 21:19:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # IFS=: 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # read -r var val 00:05:18.526 21:19:43 -- accel/accel.sh@20 -- # val= 00:05:18.526 21:19:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # IFS=: 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # read -r var val 00:05:18.526 21:19:43 -- accel/accel.sh@20 -- # val= 00:05:18.526 21:19:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # IFS=: 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # read -r var val 00:05:18.526 21:19:43 -- accel/accel.sh@20 -- # val= 00:05:18.526 21:19:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # IFS=: 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # read -r var val 00:05:18.526 21:19:43 -- accel/accel.sh@20 -- # val= 00:05:18.526 21:19:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # IFS=: 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # read -r var val 00:05:18.526 21:19:43 -- accel/accel.sh@20 -- # val= 00:05:18.526 21:19:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.526 
21:19:43 -- accel/accel.sh@19 -- # IFS=: 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # read -r var val 00:05:18.526 21:19:43 -- accel/accel.sh@20 -- # val= 00:05:18.526 21:19:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # IFS=: 00:05:18.526 21:19:43 -- accel/accel.sh@19 -- # read -r var val 00:05:18.526 21:19:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.526 21:19:43 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:18.526 21:19:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.526 00:05:18.526 real 0m1.498s 00:05:18.526 user 0m4.808s 00:05:18.526 sys 0m0.157s 00:05:18.526 21:19:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.526 21:19:43 -- common/autotest_common.sh@10 -- # set +x 00:05:18.526 ************************************ 00:05:18.526 END TEST accel_decomp_mcore 00:05:18.526 ************************************ 00:05:18.526 21:19:43 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:18.526 21:19:43 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:18.526 21:19:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.526 21:19:43 -- common/autotest_common.sh@10 -- # set +x 00:05:18.526 ************************************ 00:05:18.526 START TEST accel_decomp_full_mcore 00:05:18.526 ************************************ 00:05:18.526 21:19:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:18.526 21:19:44 -- accel/accel.sh@16 -- # local accel_opc 00:05:18.526 21:19:44 -- accel/accel.sh@17 -- # local accel_module 00:05:18.526 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.526 21:19:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:18.526 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.526 21:19:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:18.526 21:19:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.526 21:19:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.526 21:19:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.526 21:19:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.526 21:19:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.526 21:19:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.526 21:19:44 -- accel/accel.sh@40 -- # local IFS=, 00:05:18.526 21:19:44 -- accel/accel.sh@41 -- # jq -r . 00:05:18.526 [2024-04-24 21:19:44.081120] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
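(accel_decomp_full_mcore combines the two previous variations: full-size 111250-byte operations via -o 0 on the four-core mask -m 0xf. Sketch, same assumptions:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # full-buffer decompress spread across cores 0-3
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf
)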
00:05:18.526 [2024-04-24 21:19:44.081183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496518 ] 00:05:18.526 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.526 [2024-04-24 21:19:44.142787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.785 [2024-04-24 21:19:44.263876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.785 [2024-04-24 21:19:44.263931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.785 [2024-04-24 21:19:44.263982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.785 [2024-04-24 21:19:44.263985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val= 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val= 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val= 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val=0xf 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val= 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val= 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val=decompress 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val= 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val=software 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@22 -- # accel_module=software 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val=32 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val=32 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val=1 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val=Yes 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val= 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.785 21:19:44 -- accel/accel.sh@20 -- # val= 00:05:18.785 21:19:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.785 21:19:44 -- accel/accel.sh@19 -- # read -r var val 00:05:20.161 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.161 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.161 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.161 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.161 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.161 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.161 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.161 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.161 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.161 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.161 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.161 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.161 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.161 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.162 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.162 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.162 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.162 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.162 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.162 
21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.162 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.162 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.162 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.162 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.162 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.162 21:19:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.162 21:19:45 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:20.162 21:19:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.162 00:05:20.162 real 0m1.509s 00:05:20.162 user 0m4.881s 00:05:20.162 sys 0m0.143s 00:05:20.162 21:19:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.162 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:05:20.162 ************************************ 00:05:20.162 END TEST accel_decomp_full_mcore 00:05:20.162 ************************************ 00:05:20.162 21:19:45 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:20.162 21:19:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:20.162 21:19:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.162 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:05:20.162 ************************************ 00:05:20.162 START TEST accel_decomp_mthread 00:05:20.162 ************************************ 00:05:20.162 21:19:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:20.162 21:19:45 -- accel/accel.sh@16 -- # local accel_opc 00:05:20.162 21:19:45 -- accel/accel.sh@17 -- # local accel_module 00:05:20.162 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.162 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.162 21:19:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:20.162 21:19:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:20.162 21:19:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.162 21:19:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.162 21:19:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.162 21:19:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.162 21:19:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.162 21:19:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.162 21:19:45 -- accel/accel.sh@40 -- # local IFS=, 00:05:20.162 21:19:45 -- accel/accel.sh@41 -- # jq -r . 00:05:20.162 [2024-04-24 21:19:45.712099] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
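(accel_decomp_mthread returns to a single core, mask 0x1, but adds -T 2, which appears as val=2 in the xtrace below where the single-threaded runs logged val=1. Reading -T as a worker-thread count is an assumption here; the log only shows the value being set. Sketch, same assumptions:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -T 2: two worker threads submit the decompress workload (assumed semantics; the log shows only val=2)
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2
)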
00:05:20.162 [2024-04-24 21:19:45.712164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496800 ] 00:05:20.162 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.162 [2024-04-24 21:19:45.777858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.421 [2024-04-24 21:19:45.899814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val=0x1 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val=decompress 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val=software 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@22 -- # accel_module=software 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val=32 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 
-- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val=32 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val=2 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val=Yes 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.421 21:19:45 -- accel/accel.sh@20 -- # val= 00:05:20.421 21:19:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # IFS=: 00:05:20.421 21:19:45 -- accel/accel.sh@19 -- # read -r var val 00:05:21.795 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:21.795 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.795 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:21.795 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.795 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:21.795 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.795 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:21.795 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.795 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:21.795 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.795 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:21.795 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.795 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:21.795 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.795 21:19:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.795 21:19:47 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:21.795 21:19:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.795 00:05:21.795 real 0m1.499s 00:05:21.795 user 0m1.349s 00:05:21.795 sys 0m0.152s 00:05:21.795 21:19:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.795 21:19:47 -- common/autotest_common.sh@10 -- # set +x 
00:05:21.795 ************************************ 00:05:21.795 END TEST accel_decomp_mthread 00:05:21.795 ************************************ 00:05:21.795 21:19:47 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:21.795 21:19:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:21.795 21:19:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.795 21:19:47 -- common/autotest_common.sh@10 -- # set +x 00:05:21.795 ************************************ 00:05:21.795 START TEST accel_deomp_full_mthread 00:05:21.795 ************************************ 00:05:21.795 21:19:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:21.795 21:19:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:21.795 21:19:47 -- accel/accel.sh@17 -- # local accel_module 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.795 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.795 21:19:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:21.795 21:19:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:21.795 21:19:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.795 21:19:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.795 21:19:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.795 21:19:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.795 21:19:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.795 21:19:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.795 21:19:47 -- accel/accel.sh@40 -- # local IFS=, 00:05:21.795 21:19:47 -- accel/accel.sh@41 -- # jq -r . 00:05:21.795 [2024-04-24 21:19:47.338777] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
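(The last variation in this stretch, accel_deomp_full_mthread, stacks -o 0 and -T 2: full 111250-byte operations submitted from two threads on core 0. Sketch, same assumptions as the earlier ones:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # full-buffer decompress driven by two worker threads on a single core
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2
)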
00:05:21.795 [2024-04-24 21:19:47.338843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496965 ] 00:05:21.795 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.795 [2024-04-24 21:19:47.400619] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.056 [2024-04-24 21:19:47.523243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val=0x1 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val=decompress 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val=software 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@22 -- # accel_module=software 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val=32 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 
21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val=32 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val=2 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val=Yes 00:05:22.056 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.056 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.056 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:22.057 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.057 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.057 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:22.057 21:19:47 -- accel/accel.sh@20 -- # val= 00:05:22.057 21:19:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.057 21:19:47 -- accel/accel.sh@19 -- # IFS=: 00:05:22.057 21:19:47 -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 21:19:48 -- accel/accel.sh@20 -- # val= 00:05:23.429 21:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 21:19:48 -- accel/accel.sh@20 -- # val= 00:05:23.429 21:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 21:19:48 -- accel/accel.sh@20 -- # val= 00:05:23.429 21:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 21:19:48 -- accel/accel.sh@20 -- # val= 00:05:23.429 21:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 21:19:48 -- accel/accel.sh@20 -- # val= 00:05:23.429 21:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 21:19:48 -- accel/accel.sh@20 -- # val= 00:05:23.429 21:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 21:19:48 -- accel/accel.sh@20 -- # val= 00:05:23.429 21:19:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 21:19:48 -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 21:19:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.429 21:19:48 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:23.429 21:19:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.429 00:05:23.429 real 0m1.519s 00:05:23.429 user 0m1.370s 00:05:23.429 sys 0m0.151s 00:05:23.429 21:19:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.429 21:19:48 -- common/autotest_common.sh@10 -- # 
set +x 00:05:23.429 ************************************ 00:05:23.429 END TEST accel_deomp_full_mthread 00:05:23.429 ************************************ 00:05:23.429 21:19:48 -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:23.429 21:19:48 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:23.429 21:19:48 -- accel/accel.sh@137 -- # build_accel_config 00:05:23.429 21:19:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:23.429 21:19:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.429 21:19:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.429 21:19:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.429 21:19:48 -- common/autotest_common.sh@10 -- # set +x 00:05:23.429 21:19:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.429 21:19:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.429 21:19:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.429 21:19:48 -- accel/accel.sh@40 -- # local IFS=, 00:05:23.429 21:19:48 -- accel/accel.sh@41 -- # jq -r . 00:05:23.429 ************************************ 00:05:23.429 START TEST accel_dif_functional_tests 00:05:23.429 ************************************ 00:05:23.429 21:19:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:23.429 [2024-04-24 21:19:48.995526] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:05:23.429 [2024-04-24 21:19:48.995598] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497146 ] 00:05:23.429 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.429 [2024-04-24 21:19:49.065848] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.687 [2024-04-24 21:19:49.192373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.687 [2024-04-24 21:19:49.192429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.687 [2024-04-24 21:19:49.192433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.687 00:05:23.687 00:05:23.687 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.687 http://cunit.sourceforge.net/ 00:05:23.687 00:05:23.687 00:05:23.687 Suite: accel_dif 00:05:23.687 Test: verify: DIF generated, GUARD check ...passed 00:05:23.687 Test: verify: DIF generated, APPTAG check ...passed 00:05:23.687 Test: verify: DIF generated, REFTAG check ...passed 00:05:23.687 Test: verify: DIF not generated, GUARD check ...[2024-04-24 21:19:49.295275] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:23.687 [2024-04-24 21:19:49.295347] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:23.687 passed 00:05:23.687 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 21:19:49.295389] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:23.687 [2024-04-24 21:19:49.295422] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:23.687 passed 00:05:23.687 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 21:19:49.295457] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:23.687 [2024-04-24 
21:19:49.295488] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:23.687 passed
00:05:23.687 Test: verify: APPTAG correct, APPTAG check ...passed
00:05:23.687 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 21:19:49.295562] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:05:23.687 passed
00:05:23.687 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:05:23.687 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:05:23.687 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:05:23.687 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 21:19:49.295744] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:05:23.688 passed
00:05:23.688 Test: generate copy: DIF generated, GUARD check ...passed
00:05:23.688 Test: generate copy: DIF generated, APTTAG check ...passed
00:05:23.688 Test: generate copy: DIF generated, REFTAG check ...passed
00:05:23.688 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:05:23.688 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:05:23.688 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:05:23.688 Test: generate copy: iovecs-len validate ...[2024-04-24 21:19:49.296010] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:05:23.688 passed
00:05:23.688 Test: generate copy: buffer alignment validate ...passed
00:05:23.688
00:05:23.688 Run Summary: Type Total Ran Passed Failed Inactive
00:05:23.688 suites 1 1 n/a 0 0
00:05:23.688 tests 20 20 20 0 0
00:05:23.688 asserts 204 204 204 0 n/a
00:05:23.688
00:05:23.688 Elapsed time = 0.003 seconds
00:05:23.946
00:05:23.946 real 0m0.615s
00:05:23.946 user 0m0.884s
00:05:23.946 sys 0m0.192s
00:05:23.946 21:19:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:23.946 21:19:49 -- common/autotest_common.sh@10 -- # set +x
00:05:23.946 ************************************
00:05:23.946 END TEST accel_dif_functional_tests
00:05:23.946 ************************************
00:05:23.946
00:05:23.946 real 0m35.514s
00:05:23.946 user 0m37.665s
00:05:23.946 sys 0m5.589s
00:05:23.946 21:19:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:23.946 21:19:49 -- common/autotest_common.sh@10 -- # set +x
00:05:23.946 ************************************
00:05:23.946 END TEST accel
00:05:23.946 ************************************
00:05:23.946 21:19:49 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:23.946 21:19:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:23.946 21:19:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:23.946 21:19:49 -- common/autotest_common.sh@10 -- # set +x
00:05:24.205 ************************************
00:05:24.205 START TEST accel_rpc
00:05:24.205 ************************************
00:05:24.205 21:19:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:24.205 * Looking for test storage...
00:05:24.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:24.205 21:19:49 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:24.205 21:19:49 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2497331 00:05:24.205 21:19:49 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:24.205 21:19:49 -- accel/accel_rpc.sh@15 -- # waitforlisten 2497331 00:05:24.205 21:19:49 -- common/autotest_common.sh@817 -- # '[' -z 2497331 ']' 00:05:24.205 21:19:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.205 21:19:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.205 21:19:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.205 21:19:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.205 21:19:49 -- common/autotest_common.sh@10 -- # set +x 00:05:24.205 [2024-04-24 21:19:49.826776] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:05:24.205 [2024-04-24 21:19:49.826856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497331 ] 00:05:24.205 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.205 [2024-04-24 21:19:49.882875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.463 [2024-04-24 21:19:49.990625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.463 21:19:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.463 21:19:50 -- common/autotest_common.sh@850 -- # return 0 00:05:24.463 21:19:50 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:24.463 21:19:50 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:24.463 21:19:50 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:24.463 21:19:50 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:24.463 21:19:50 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:24.463 21:19:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.463 21:19:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.463 21:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.463 ************************************ 00:05:24.463 START TEST accel_assign_opcode 00:05:24.463 ************************************ 00:05:24.463 21:19:50 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:05:24.463 21:19:50 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:24.463 21:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.464 21:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.464 [2024-04-24 21:19:50.131473] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:24.464 21:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.464 21:19:50 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:24.464 21:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.464 21:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.464 [2024-04-24 21:19:50.139488] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:05:24.722 21:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.722 21:19:50 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:24.722 21:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.722 21:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.722 21:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.722 21:19:50 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:24.722 21:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.722 21:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.722 21:19:50 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:24.722 21:19:50 -- accel/accel_rpc.sh@42 -- # grep software 00:05:24.980 21:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.980 software 00:05:24.980 00:05:24.980 real 0m0.307s 00:05:24.980 user 0m0.041s 00:05:24.980 sys 0m0.008s 00:05:24.980 21:19:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.980 21:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.980 ************************************ 00:05:24.980 END TEST accel_assign_opcode 00:05:24.980 ************************************ 00:05:24.980 21:19:50 -- accel/accel_rpc.sh@55 -- # killprocess 2497331 00:05:24.980 21:19:50 -- common/autotest_common.sh@936 -- # '[' -z 2497331 ']' 00:05:24.980 21:19:50 -- common/autotest_common.sh@940 -- # kill -0 2497331 00:05:24.980 21:19:50 -- common/autotest_common.sh@941 -- # uname 00:05:24.980 21:19:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.980 21:19:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2497331 00:05:24.980 21:19:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.980 21:19:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.980 21:19:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2497331' 00:05:24.980 killing process with pid 2497331 00:05:24.980 21:19:50 -- common/autotest_common.sh@955 -- # kill 2497331 00:05:24.980 21:19:50 -- common/autotest_common.sh@960 -- # wait 2497331 00:05:25.547 00:05:25.547 real 0m1.247s 00:05:25.547 user 0m1.225s 00:05:25.547 sys 0m0.446s 00:05:25.547 21:19:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.547 21:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:25.547 ************************************ 00:05:25.547 END TEST accel_rpc 00:05:25.547 ************************************ 00:05:25.547 21:19:50 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:25.547 21:19:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.547 21:19:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.547 21:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:25.547 ************************************ 00:05:25.547 START TEST app_cmdline 00:05:25.547 ************************************ 00:05:25.547 21:19:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:25.547 * Looking for test storage... 
00:05:25.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:25.547 21:19:51 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:25.547 21:19:51 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2497556 00:05:25.547 21:19:51 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:25.547 21:19:51 -- app/cmdline.sh@18 -- # waitforlisten 2497556 00:05:25.547 21:19:51 -- common/autotest_common.sh@817 -- # '[' -z 2497556 ']' 00:05:25.547 21:19:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.547 21:19:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.547 21:19:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.547 21:19:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.547 21:19:51 -- common/autotest_common.sh@10 -- # set +x 00:05:25.547 [2024-04-24 21:19:51.197137] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:05:25.547 [2024-04-24 21:19:51.197241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497556 ] 00:05:25.547 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.805 [2024-04-24 21:19:51.259020] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.805 [2024-04-24 21:19:51.376882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.063 21:19:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:26.063 21:19:51 -- common/autotest_common.sh@850 -- # return 0 00:05:26.063 21:19:51 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:26.322 { 00:05:26.322 "version": "SPDK v24.05-pre git sha1 dd57ed3e8", 00:05:26.322 "fields": { 00:05:26.322 "major": 24, 00:05:26.322 "minor": 5, 00:05:26.322 "patch": 0, 00:05:26.322 "suffix": "-pre", 00:05:26.322 "commit": "dd57ed3e8" 00:05:26.322 } 00:05:26.322 } 00:05:26.322 21:19:51 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:26.322 21:19:51 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:26.322 21:19:51 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:26.322 21:19:51 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:26.322 21:19:51 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:26.322 21:19:51 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:26.322 21:19:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.322 21:19:51 -- app/cmdline.sh@26 -- # sort 00:05:26.322 21:19:51 -- common/autotest_common.sh@10 -- # set +x 00:05:26.322 21:19:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.322 21:19:51 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:26.322 21:19:51 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:26.322 21:19:51 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:26.322 21:19:51 -- common/autotest_common.sh@638 -- # local es=0 00:05:26.322 21:19:51 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:26.322 21:19:51 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.322 21:19:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:26.322 21:19:51 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.322 21:19:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:26.322 21:19:51 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.322 21:19:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:26.322 21:19:51 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.322 21:19:51 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:26.322 21:19:51 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:26.580 request: 00:05:26.580 { 00:05:26.581 "method": "env_dpdk_get_mem_stats", 00:05:26.581 "req_id": 1 00:05:26.581 } 00:05:26.581 Got JSON-RPC error response 00:05:26.581 response: 00:05:26.581 { 00:05:26.581 "code": -32601, 00:05:26.581 "message": "Method not found" 00:05:26.581 } 00:05:26.581 21:19:52 -- common/autotest_common.sh@641 -- # es=1 00:05:26.581 21:19:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:26.581 21:19:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:26.581 21:19:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:26.581 21:19:52 -- app/cmdline.sh@1 -- # killprocess 2497556 00:05:26.581 21:19:52 -- common/autotest_common.sh@936 -- # '[' -z 2497556 ']' 00:05:26.581 21:19:52 -- common/autotest_common.sh@940 -- # kill -0 2497556 00:05:26.581 21:19:52 -- common/autotest_common.sh@941 -- # uname 00:05:26.581 21:19:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.581 21:19:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2497556 00:05:26.839 21:19:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.839 21:19:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.839 21:19:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2497556' 00:05:26.839 killing process with pid 2497556 00:05:26.839 21:19:52 -- common/autotest_common.sh@955 -- # kill 2497556 00:05:26.839 21:19:52 -- common/autotest_common.sh@960 -- # wait 2497556 00:05:27.098 00:05:27.098 real 0m1.645s 00:05:27.098 user 0m2.042s 00:05:27.098 sys 0m0.484s 00:05:27.098 21:19:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.098 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:27.098 ************************************ 00:05:27.098 END TEST app_cmdline 00:05:27.098 ************************************ 00:05:27.098 21:19:52 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:27.098 21:19:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.098 21:19:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.098 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:27.357 ************************************ 00:05:27.357 START TEST version 00:05:27.357 
************************************ 00:05:27.357 21:19:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:27.357 * Looking for test storage... 00:05:27.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:27.357 21:19:52 -- app/version.sh@17 -- # get_header_version major 00:05:27.357 21:19:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.357 21:19:52 -- app/version.sh@14 -- # cut -f2 00:05:27.357 21:19:52 -- app/version.sh@14 -- # tr -d '"' 00:05:27.357 21:19:52 -- app/version.sh@17 -- # major=24 00:05:27.357 21:19:52 -- app/version.sh@18 -- # get_header_version minor 00:05:27.357 21:19:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.357 21:19:52 -- app/version.sh@14 -- # cut -f2 00:05:27.357 21:19:52 -- app/version.sh@14 -- # tr -d '"' 00:05:27.357 21:19:52 -- app/version.sh@18 -- # minor=5 00:05:27.357 21:19:52 -- app/version.sh@19 -- # get_header_version patch 00:05:27.357 21:19:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.357 21:19:52 -- app/version.sh@14 -- # cut -f2 00:05:27.357 21:19:52 -- app/version.sh@14 -- # tr -d '"' 00:05:27.357 21:19:52 -- app/version.sh@19 -- # patch=0 00:05:27.357 21:19:52 -- app/version.sh@20 -- # get_header_version suffix 00:05:27.357 21:19:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.357 21:19:52 -- app/version.sh@14 -- # cut -f2 00:05:27.357 21:19:52 -- app/version.sh@14 -- # tr -d '"' 00:05:27.357 21:19:52 -- app/version.sh@20 -- # suffix=-pre 00:05:27.357 21:19:52 -- app/version.sh@22 -- # version=24.5 00:05:27.357 21:19:52 -- app/version.sh@25 -- # (( patch != 0 )) 00:05:27.357 21:19:52 -- app/version.sh@28 -- # version=24.5rc0 00:05:27.357 21:19:52 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:27.357 21:19:52 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:27.357 21:19:52 -- app/version.sh@30 -- # py_version=24.5rc0 00:05:27.357 21:19:52 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:27.357 00:05:27.357 real 0m0.111s 00:05:27.357 user 0m0.058s 00:05:27.357 sys 0m0.075s 00:05:27.357 21:19:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.357 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:27.357 ************************************ 00:05:27.357 END TEST version 00:05:27.357 ************************************ 00:05:27.357 21:19:53 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:27.357 21:19:53 -- spdk/autotest.sh@194 -- # uname -s 00:05:27.357 21:19:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:27.357 21:19:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:27.357 21:19:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:27.357 21:19:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:27.357 21:19:53 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:05:27.357 21:19:53 -- spdk/autotest.sh@258 -- # timing_exit lib 00:05:27.357 21:19:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:27.357 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:27.357 21:19:53 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:27.357 21:19:53 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:05:27.357 21:19:53 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:05:27.357 21:19:53 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:05:27.357 21:19:53 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:05:27.357 21:19:53 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:05:27.357 21:19:53 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:27.357 21:19:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:27.357 21:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.358 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:27.615 ************************************ 00:05:27.615 START TEST nvmf_tcp 00:05:27.615 ************************************ 00:05:27.615 21:19:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:27.615 * Looking for test storage... 00:05:27.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:27.615 21:19:53 -- nvmf/nvmf.sh@10 -- # uname -s 00:05:27.615 21:19:53 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:27.615 21:19:53 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.615 21:19:53 -- nvmf/common.sh@7 -- # uname -s 00:05:27.615 21:19:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.615 21:19:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.615 21:19:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.615 21:19:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.615 21:19:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.615 21:19:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.615 21:19:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.615 21:19:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.615 21:19:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.615 21:19:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.615 21:19:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.615 21:19:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.615 21:19:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.615 21:19:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.615 21:19:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:27.615 21:19:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.615 21:19:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.615 21:19:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.615 21:19:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.615 21:19:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.615 21:19:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.615 21:19:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.616 21:19:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.616 21:19:53 -- paths/export.sh@5 -- # export PATH 00:05:27.616 21:19:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.616 21:19:53 -- nvmf/common.sh@47 -- # : 0 00:05:27.616 21:19:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:27.616 21:19:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:27.616 21:19:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.616 21:19:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.616 21:19:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.616 21:19:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:27.616 21:19:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:27.616 21:19:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:27.616 21:19:53 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:27.616 21:19:53 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:27.616 21:19:53 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:27.616 21:19:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:27.616 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:27.616 21:19:53 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:27.616 21:19:53 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:27.616 21:19:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:27.616 21:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.616 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:27.873 ************************************ 00:05:27.873 START TEST nvmf_example 00:05:27.873 ************************************ 00:05:27.873 21:19:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:27.873 * Looking for test storage... 
00:05:27.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:27.873 21:19:53 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.873 21:19:53 -- nvmf/common.sh@7 -- # uname -s 00:05:27.873 21:19:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.873 21:19:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.873 21:19:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.873 21:19:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.873 21:19:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.873 21:19:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.873 21:19:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.873 21:19:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.873 21:19:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.873 21:19:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.873 21:19:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.873 21:19:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.873 21:19:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.873 21:19:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.873 21:19:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:27.873 21:19:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.873 21:19:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.874 21:19:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.874 21:19:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.874 21:19:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.874 21:19:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.874 21:19:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.874 21:19:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.874 21:19:53 -- paths/export.sh@5 -- # export PATH 00:05:27.874 21:19:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.874 21:19:53 -- nvmf/common.sh@47 -- # : 0 00:05:27.874 21:19:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:27.874 21:19:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:27.874 21:19:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.874 21:19:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.874 21:19:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.874 21:19:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:27.874 21:19:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:27.874 21:19:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:27.874 21:19:53 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:27.874 21:19:53 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:27.874 21:19:53 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:27.874 21:19:53 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:27.874 21:19:53 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:27.874 21:19:53 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:27.874 21:19:53 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:27.874 21:19:53 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:27.874 21:19:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:27.874 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:27.874 21:19:53 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:27.874 21:19:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:05:27.874 21:19:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:27.874 21:19:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:27.874 21:19:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:27.874 21:19:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:27.874 21:19:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:27.874 21:19:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:27.874 21:19:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:27.874 21:19:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:27.874 21:19:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:27.874 21:19:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:27.874 21:19:53 -- 
common/autotest_common.sh@10 -- # set +x 00:05:29.777 21:19:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:29.777 21:19:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:29.777 21:19:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:29.777 21:19:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:29.777 21:19:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:29.777 21:19:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:29.777 21:19:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:29.777 21:19:55 -- nvmf/common.sh@295 -- # net_devs=() 00:05:29.777 21:19:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:29.778 21:19:55 -- nvmf/common.sh@296 -- # e810=() 00:05:29.778 21:19:55 -- nvmf/common.sh@296 -- # local -ga e810 00:05:29.778 21:19:55 -- nvmf/common.sh@297 -- # x722=() 00:05:29.778 21:19:55 -- nvmf/common.sh@297 -- # local -ga x722 00:05:29.778 21:19:55 -- nvmf/common.sh@298 -- # mlx=() 00:05:29.778 21:19:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:29.778 21:19:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:29.778 21:19:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:29.778 21:19:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:29.778 21:19:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:29.778 21:19:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:29.778 21:19:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:29.778 21:19:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:29.778 21:19:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:29.778 21:19:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:29.778 21:19:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:29.778 21:19:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:29.778 21:19:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:29.778 21:19:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:29.778 21:19:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:29.778 21:19:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:29.778 21:19:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:29.778 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:29.778 21:19:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:29.778 21:19:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:29.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:29.778 21:19:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
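Both E810 ports matched on device ID 0x159b; the @383 steps that follow resolve each PCI function to its kernel netdev by globbing sysfs. Condensed, with the PCI addresses from this run:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"   # resolves to cvl_0_0 and cvl_0_1 here
    done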
00:05:29.778 21:19:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:29.778 21:19:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:29.778 21:19:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.778 21:19:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:29.778 21:19:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.778 21:19:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:29.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:29.778 21:19:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.778 21:19:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:29.778 21:19:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.778 21:19:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:29.778 21:19:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.778 21:19:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:29.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:29.778 21:19:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.778 21:19:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:29.778 21:19:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:29.778 21:19:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:05:29.778 21:19:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:05:29.778 21:19:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:29.778 21:19:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:29.778 21:19:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:29.778 21:19:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:29.778 21:19:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:29.778 21:19:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:29.778 21:19:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:29.778 21:19:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:29.778 21:19:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:29.778 21:19:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:29.778 21:19:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:29.778 21:19:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:29.778 21:19:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:29.778 21:19:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:29.778 21:19:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:29.778 21:19:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:29.778 21:19:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:29.778 21:19:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:29.778 21:19:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:29.778 21:19:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:29.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:29.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms
00:05:29.778
00:05:29.778 --- 10.0.0.2 ping statistics ---
00:05:29.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:29.778 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:05:29.778 21:19:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:30.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:30.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms
00:05:30.037
00:05:30.037 --- 10.0.0.1 ping statistics ---
00:05:30.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:30.037 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:05:30.037 21:19:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:30.037 21:19:55 -- nvmf/common.sh@411 -- # return 0
00:05:30.037 21:19:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:05:30.037 21:19:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:30.037 21:19:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:05:30.037 21:19:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:05:30.037 21:19:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:30.037 21:19:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:05:30.037 21:19:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:05:30.037 21:19:55 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:05:30.037 21:19:55 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:05:30.037 21:19:55 -- common/autotest_common.sh@710 -- # xtrace_disable
00:05:30.037 21:19:55 -- common/autotest_common.sh@10 -- # set +x
00:05:30.037 21:19:55 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:05:30.037 21:19:55 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:05:30.037 21:19:55 -- target/nvmf_example.sh@34 -- # nvmfpid=2499597
00:05:30.037 21:19:55 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:05:30.037 21:19:55 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:05:30.037 21:19:55 -- target/nvmf_example.sh@36 -- # waitforlisten 2499597
00:05:30.037 21:19:55 -- common/autotest_common.sh@817 -- # '[' -z 2499597 ']'
00:05:30.037 21:19:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.037 21:19:55 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:30.037 21:19:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:30.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
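The two one-packet pings above validate the namespace wiring that nvmf_tcp_init put in place a few steps earlier; condensed, with interface names and addresses exactly as recorded in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT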
00:05:30.037 21:19:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.037 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:05:30.037 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.971 21:19:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.971 21:19:56 -- common/autotest_common.sh@850 -- # return 0 00:05:30.971 21:19:56 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:30.971 21:19:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:30.971 21:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.971 21:19:56 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:30.971 21:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.971 21:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.971 21:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.971 21:19:56 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:30.971 21:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.971 21:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.971 21:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.971 21:19:56 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:30.971 21:19:56 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.971 21:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.971 21:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.971 21:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.971 21:19:56 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:30.971 21:19:56 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:30.971 21:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.971 21:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.971 21:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.971 21:19:56 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:30.971 21:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.971 21:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.971 21:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.971 21:19:56 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:30.971 21:19:56 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:30.971 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.169 Initializing NVMe Controllers 00:05:43.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:43.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:43.169 Initialization complete. Launching workers. 
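Before launching spdk_nvme_perf, the test provisioned the target over JSON-RPC (rpc_cmd is effectively scripts/rpc.py talking to /var/tmp/spdk.sock); the sequence, condensed from the trace above:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512     # returns Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420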
00:05:43.169 ======================================================== 00:05:43.169 Latency(us) 00:05:43.169 Device Information : IOPS MiB/s Average min max 00:05:43.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14956.39 58.42 4278.67 691.13 18254.68 00:05:43.169 ======================================================== 00:05:43.169 Total : 14956.39 58.42 4278.67 691.13 18254.68 00:05:43.169 00:05:43.169 21:20:06 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:43.169 21:20:06 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:43.169 21:20:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:05:43.169 21:20:06 -- nvmf/common.sh@117 -- # sync 00:05:43.169 21:20:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:43.169 21:20:06 -- nvmf/common.sh@120 -- # set +e 00:05:43.169 21:20:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:43.169 21:20:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:43.169 rmmod nvme_tcp 00:05:43.169 rmmod nvme_fabrics 00:05:43.169 rmmod nvme_keyring 00:05:43.169 21:20:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:43.169 21:20:06 -- nvmf/common.sh@124 -- # set -e 00:05:43.169 21:20:06 -- nvmf/common.sh@125 -- # return 0 00:05:43.170 21:20:06 -- nvmf/common.sh@478 -- # '[' -n 2499597 ']' 00:05:43.170 21:20:06 -- nvmf/common.sh@479 -- # killprocess 2499597 00:05:43.170 21:20:06 -- common/autotest_common.sh@936 -- # '[' -z 2499597 ']' 00:05:43.170 21:20:06 -- common/autotest_common.sh@940 -- # kill -0 2499597 00:05:43.170 21:20:06 -- common/autotest_common.sh@941 -- # uname 00:05:43.170 21:20:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.170 21:20:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2499597 00:05:43.170 21:20:06 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:05:43.170 21:20:06 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:05:43.170 21:20:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2499597' 00:05:43.170 killing process with pid 2499597 00:05:43.170 21:20:06 -- common/autotest_common.sh@955 -- # kill 2499597 00:05:43.170 21:20:06 -- common/autotest_common.sh@960 -- # wait 2499597 00:05:43.170 nvmf threads initialize successfully 00:05:43.170 bdev subsystem init successfully 00:05:43.170 created a nvmf target service 00:05:43.170 create targets's poll groups done 00:05:43.170 all subsystems of target started 00:05:43.170 nvmf target is running 00:05:43.170 all subsystems of target stopped 00:05:43.170 destroy targets's poll groups done 00:05:43.170 destroyed the nvmf target service 00:05:43.170 bdev subsystem finish successfully 00:05:43.170 nvmf threads destroy successfully 00:05:43.170 21:20:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:05:43.170 21:20:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:05:43.170 21:20:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:05:43.170 21:20:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:43.170 21:20:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:43.170 21:20:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.170 21:20:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:43.170 21:20:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.739 21:20:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:43.739 21:20:09 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:43.739 21:20:09 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:05:43.739 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:05:43.739 00:05:43.739 real 0m15.969s 00:05:43.739 user 0m45.541s 00:05:43.740 sys 0m3.223s 00:05:43.740 21:20:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.740 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:05:43.740 ************************************ 00:05:43.740 END TEST nvmf_example 00:05:43.740 ************************************ 00:05:43.740 21:20:09 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:43.740 21:20:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:43.740 21:20:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.740 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:05:43.740 ************************************ 00:05:43.740 START TEST nvmf_filesystem 00:05:43.740 ************************************ 00:05:43.740 21:20:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:44.002 * Looking for test storage... 00:05:44.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.002 21:20:09 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:44.002 21:20:09 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:44.002 21:20:09 -- common/autotest_common.sh@34 -- # set -e 00:05:44.002 21:20:09 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:44.002 21:20:09 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:44.002 21:20:09 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:44.002 21:20:09 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:44.002 21:20:09 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:44.002 21:20:09 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:44.002 21:20:09 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:44.002 21:20:09 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:44.002 21:20:09 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:44.002 21:20:09 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:44.002 21:20:09 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:44.002 21:20:09 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:44.002 21:20:09 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:44.002 21:20:09 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:44.002 21:20:09 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:44.002 21:20:09 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:44.002 21:20:09 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:44.002 21:20:09 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:44.002 21:20:09 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:44.002 21:20:09 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:44.002 21:20:09 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:44.002 21:20:09 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:44.002 21:20:09 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:44.002 21:20:09 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:44.002 21:20:09 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:44.002 21:20:09 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:44.002 21:20:09 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:44.002 21:20:09 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:44.002 21:20:09 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:44.002 21:20:09 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:44.002 21:20:09 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:44.002 21:20:09 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:44.002 21:20:09 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:44.002 21:20:09 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:44.002 21:20:09 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:44.002 21:20:09 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:44.002 21:20:09 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:44.002 21:20:09 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:44.002 21:20:09 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:44.002 21:20:09 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:44.002 21:20:09 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:44.002 21:20:09 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:44.002 21:20:09 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:44.002 21:20:09 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:44.002 21:20:09 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:44.002 21:20:09 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:44.002 21:20:09 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:44.002 21:20:09 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:44.002 21:20:09 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:44.002 21:20:09 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:44.002 21:20:09 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:05:44.002 21:20:09 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:05:44.002 21:20:09 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:44.002 21:20:09 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:05:44.002 21:20:09 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:05:44.002 21:20:09 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:05:44.002 21:20:09 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:05:44.002 21:20:09 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:05:44.002 21:20:09 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:05:44.002 21:20:09 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:05:44.002 21:20:09 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:05:44.002 21:20:09 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:05:44.002 21:20:09 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:05:44.002 21:20:09 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:05:44.002 21:20:09 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:05:44.002 21:20:09 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:05:44.002 21:20:09 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:05:44.002 21:20:09 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:05:44.002 21:20:09 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:05:44.002 
21:20:09 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:05:44.002 21:20:09 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:05:44.002 21:20:09 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:05:44.002 21:20:09 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:44.002 21:20:09 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:05:44.002 21:20:09 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:05:44.002 21:20:09 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:05:44.002 21:20:09 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:05:44.002 21:20:09 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:05:44.002 21:20:09 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:05:44.002 21:20:09 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:05:44.002 21:20:09 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:05:44.002 21:20:09 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:05:44.002 21:20:09 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:05:44.002 21:20:09 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:05:44.002 21:20:09 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:44.002 21:20:09 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:05:44.002 21:20:09 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:05:44.002 21:20:09 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:44.002 21:20:09 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:44.002 21:20:09 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:44.002 21:20:09 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:44.002 21:20:09 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:44.002 21:20:09 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:44.002 21:20:09 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:44.002 21:20:09 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:44.002 21:20:09 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:44.002 21:20:09 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:44.002 21:20:09 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:44.002 21:20:09 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:44.002 21:20:09 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:44.002 21:20:09 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:44.002 21:20:09 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:44.002 21:20:09 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:44.002 #define SPDK_CONFIG_H 00:05:44.002 #define SPDK_CONFIG_APPS 1 00:05:44.002 #define SPDK_CONFIG_ARCH native 00:05:44.002 #undef SPDK_CONFIG_ASAN 00:05:44.002 #undef SPDK_CONFIG_AVAHI 00:05:44.002 #undef SPDK_CONFIG_CET 00:05:44.002 #define SPDK_CONFIG_COVERAGE 1 00:05:44.002 #define SPDK_CONFIG_CROSS_PREFIX 00:05:44.002 #undef SPDK_CONFIG_CRYPTO 00:05:44.002 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:44.002 #undef 
SPDK_CONFIG_CUSTOMOCF 00:05:44.002 #undef SPDK_CONFIG_DAOS 00:05:44.003 #define SPDK_CONFIG_DAOS_DIR 00:05:44.003 #define SPDK_CONFIG_DEBUG 1 00:05:44.003 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:44.003 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:44.003 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:44.003 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:44.003 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:44.003 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:44.003 #define SPDK_CONFIG_EXAMPLES 1 00:05:44.003 #undef SPDK_CONFIG_FC 00:05:44.003 #define SPDK_CONFIG_FC_PATH 00:05:44.003 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:44.003 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:44.003 #undef SPDK_CONFIG_FUSE 00:05:44.003 #undef SPDK_CONFIG_FUZZER 00:05:44.003 #define SPDK_CONFIG_FUZZER_LIB 00:05:44.003 #undef SPDK_CONFIG_GOLANG 00:05:44.003 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:44.003 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:44.003 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:44.003 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:05:44.003 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:44.003 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:44.003 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:44.003 #define SPDK_CONFIG_IDXD 1 00:05:44.003 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:44.003 #undef SPDK_CONFIG_IPSEC_MB 00:05:44.003 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:44.003 #define SPDK_CONFIG_ISAL 1 00:05:44.003 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:44.003 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:44.003 #define SPDK_CONFIG_LIBDIR 00:05:44.003 #undef SPDK_CONFIG_LTO 00:05:44.003 #define SPDK_CONFIG_MAX_LCORES 00:05:44.003 #define SPDK_CONFIG_NVME_CUSE 1 00:05:44.003 #undef SPDK_CONFIG_OCF 00:05:44.003 #define SPDK_CONFIG_OCF_PATH 00:05:44.003 #define SPDK_CONFIG_OPENSSL_PATH 00:05:44.003 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:44.003 #define SPDK_CONFIG_PGO_DIR 00:05:44.003 #undef SPDK_CONFIG_PGO_USE 00:05:44.003 #define SPDK_CONFIG_PREFIX /usr/local 00:05:44.003 #undef SPDK_CONFIG_RAID5F 00:05:44.003 #undef SPDK_CONFIG_RBD 00:05:44.003 #define SPDK_CONFIG_RDMA 1 00:05:44.003 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:44.003 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:44.003 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:44.003 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:44.003 #define SPDK_CONFIG_SHARED 1 00:05:44.003 #undef SPDK_CONFIG_SMA 00:05:44.003 #define SPDK_CONFIG_TESTS 1 00:05:44.003 #undef SPDK_CONFIG_TSAN 00:05:44.003 #define SPDK_CONFIG_UBLK 1 00:05:44.003 #define SPDK_CONFIG_UBSAN 1 00:05:44.003 #undef SPDK_CONFIG_UNIT_TESTS 00:05:44.003 #undef SPDK_CONFIG_URING 00:05:44.003 #define SPDK_CONFIG_URING_PATH 00:05:44.003 #undef SPDK_CONFIG_URING_ZNS 00:05:44.003 #undef SPDK_CONFIG_USDT 00:05:44.003 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:44.003 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:44.003 #define SPDK_CONFIG_VFIO_USER 1 00:05:44.003 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:44.003 #define SPDK_CONFIG_VHOST 1 00:05:44.003 #define SPDK_CONFIG_VIRTIO 1 00:05:44.003 #undef SPDK_CONFIG_VTUNE 00:05:44.003 #define SPDK_CONFIG_VTUNE_DIR 00:05:44.003 #define SPDK_CONFIG_WERROR 1 00:05:44.003 #define SPDK_CONFIG_WPDK_DIR 00:05:44.003 #undef SPDK_CONFIG_XNVME 00:05:44.003 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:44.003 21:20:09 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:44.003 21:20:09 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.003 21:20:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.003 21:20:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.003 21:20:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.003 21:20:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.003 21:20:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.003 21:20:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.003 21:20:09 -- paths/export.sh@5 -- # export PATH 00:05:44.003 21:20:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.003 21:20:09 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:44.003 21:20:09 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:44.003 21:20:09 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:44.003 21:20:09 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:44.003 21:20:09 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:44.003 21:20:09 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:44.003 21:20:09 -- pm/common@67 -- # TEST_TAG=N/A 00:05:44.003 21:20:09 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:44.003 21:20:09 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:44.003 21:20:09 -- pm/common@71 -- # uname -s 00:05:44.003 21:20:09 -- pm/common@71 -- # PM_OS=Linux 00:05:44.003 21:20:09 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:44.003 21:20:09 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:05:44.003 21:20:09 -- pm/common@76 -- # [[ Linux == Linux ]] 00:05:44.003 21:20:09 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:05:44.003 21:20:09 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:05:44.003 21:20:09 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:44.003 21:20:09 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:44.003 21:20:09 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:05:44.003 21:20:09 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:05:44.003 21:20:09 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:44.003 21:20:09 -- common/autotest_common.sh@57 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:05:44.003 21:20:09 -- common/autotest_common.sh@61 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:44.003 21:20:09 -- common/autotest_common.sh@63 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:05:44.003 21:20:09 -- common/autotest_common.sh@65 -- # : 1 00:05:44.003 21:20:09 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:44.003 21:20:09 -- common/autotest_common.sh@67 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:05:44.003 21:20:09 -- common/autotest_common.sh@69 -- # : 00:05:44.003 21:20:09 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:05:44.003 21:20:09 -- common/autotest_common.sh@71 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:05:44.003 21:20:09 -- common/autotest_common.sh@73 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:05:44.003 21:20:09 -- common/autotest_common.sh@75 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:05:44.003 21:20:09 -- common/autotest_common.sh@77 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:44.003 21:20:09 -- common/autotest_common.sh@79 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:05:44.003 21:20:09 -- common/autotest_common.sh@81 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:05:44.003 21:20:09 -- common/autotest_common.sh@83 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:05:44.003 21:20:09 -- common/autotest_common.sh@85 -- # : 1 00:05:44.003 21:20:09 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:05:44.003 21:20:09 -- common/autotest_common.sh@87 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:05:44.003 21:20:09 -- common/autotest_common.sh@89 -- # : 0 00:05:44.003 21:20:09 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:05:44.003 21:20:09 -- common/autotest_common.sh@91 -- # : 1 
00:05:44.003 21:20:09 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:05:44.003 21:20:09 -- common/autotest_common.sh@93 -- # : 1 00:05:44.003 21:20:09 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:05:44.004 21:20:09 -- common/autotest_common.sh@95 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:44.004 21:20:09 -- common/autotest_common.sh@97 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:05:44.004 21:20:09 -- common/autotest_common.sh@99 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:05:44.004 21:20:09 -- common/autotest_common.sh@101 -- # : tcp 00:05:44.004 21:20:09 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:44.004 21:20:09 -- common/autotest_common.sh@103 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:05:44.004 21:20:09 -- common/autotest_common.sh@105 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:05:44.004 21:20:09 -- common/autotest_common.sh@107 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:05:44.004 21:20:09 -- common/autotest_common.sh@109 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:05:44.004 21:20:09 -- common/autotest_common.sh@111 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:05:44.004 21:20:09 -- common/autotest_common.sh@113 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:05:44.004 21:20:09 -- common/autotest_common.sh@115 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:05:44.004 21:20:09 -- common/autotest_common.sh@117 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:44.004 21:20:09 -- common/autotest_common.sh@119 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:05:44.004 21:20:09 -- common/autotest_common.sh@121 -- # : 1 00:05:44.004 21:20:09 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:05:44.004 21:20:09 -- common/autotest_common.sh@123 -- # : 00:05:44.004 21:20:09 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:44.004 21:20:09 -- common/autotest_common.sh@125 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:05:44.004 21:20:09 -- common/autotest_common.sh@127 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:05:44.004 21:20:09 -- common/autotest_common.sh@129 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:05:44.004 21:20:09 -- common/autotest_common.sh@131 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:05:44.004 21:20:09 -- common/autotest_common.sh@133 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:05:44.004 21:20:09 -- common/autotest_common.sh@135 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:05:44.004 21:20:09 -- common/autotest_common.sh@137 -- # : 00:05:44.004 21:20:09 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:05:44.004 21:20:09 -- 
common/autotest_common.sh@139 -- # : true 00:05:44.004 21:20:09 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:05:44.004 21:20:09 -- common/autotest_common.sh@141 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:05:44.004 21:20:09 -- common/autotest_common.sh@143 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:05:44.004 21:20:09 -- common/autotest_common.sh@145 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:05:44.004 21:20:09 -- common/autotest_common.sh@147 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:05:44.004 21:20:09 -- common/autotest_common.sh@149 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:05:44.004 21:20:09 -- common/autotest_common.sh@151 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:05:44.004 21:20:09 -- common/autotest_common.sh@153 -- # : e810 00:05:44.004 21:20:09 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:05:44.004 21:20:09 -- common/autotest_common.sh@155 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:05:44.004 21:20:09 -- common/autotest_common.sh@157 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:05:44.004 21:20:09 -- common/autotest_common.sh@159 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:05:44.004 21:20:09 -- common/autotest_common.sh@161 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:05:44.004 21:20:09 -- common/autotest_common.sh@163 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:05:44.004 21:20:09 -- common/autotest_common.sh@166 -- # : 00:05:44.004 21:20:09 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:05:44.004 21:20:09 -- common/autotest_common.sh@168 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:05:44.004 21:20:09 -- common/autotest_common.sh@170 -- # : 0 00:05:44.004 21:20:09 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:44.004 21:20:09 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:44.004 21:20:09 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:44.004 21:20:09 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:44.004 21:20:09 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:44.004 21:20:09 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:44.004 21:20:09 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:44.004 21:20:09 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:44.004 21:20:09 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:44.004 21:20:09 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:44.004 21:20:09 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:44.004 21:20:09 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:44.004 21:20:09 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:44.004 21:20:09 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:44.004 21:20:09 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:05:44.004 21:20:09 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:44.004 21:20:09 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:44.004 21:20:09 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:44.004 21:20:09 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:44.004 21:20:09 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:44.004 21:20:09 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:05:44.004 21:20:09 -- common/autotest_common.sh@199 -- # cat 00:05:44.004 21:20:09 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:05:44.004 21:20:09 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:44.004 21:20:09 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:44.004 21:20:09 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:44.004 21:20:09 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:44.004 21:20:09 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:05:44.004 21:20:09 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:05:44.004 21:20:09 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:44.004 21:20:09 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:44.004 21:20:09 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:44.004 21:20:09 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:44.004 21:20:09 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:44.004 21:20:09 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:44.004 21:20:09 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:44.004 21:20:09 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:44.004 21:20:09 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:44.005 21:20:09 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:44.005 21:20:09 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:44.005 21:20:09 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:44.005 21:20:09 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:05:44.005 21:20:09 -- common/autotest_common.sh@252 -- # export valgrind= 00:05:44.005 21:20:09 -- common/autotest_common.sh@252 -- # valgrind= 00:05:44.005 21:20:09 -- common/autotest_common.sh@258 -- # uname -s 00:05:44.005 21:20:09 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:05:44.005 21:20:09 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:05:44.005 21:20:09 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:05:44.005 21:20:09 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:05:44.005 21:20:09 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:05:44.005 21:20:09 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:05:44.005 
21:20:09 -- common/autotest_common.sh@268 -- # MAKE=make 00:05:44.005 21:20:09 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:05:44.005 21:20:09 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:05:44.005 21:20:09 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:05:44.005 21:20:09 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:05:44.005 21:20:09 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:05:44.005 21:20:09 -- common/autotest_common.sh@289 -- # for i in "$@" 00:05:44.005 21:20:09 -- common/autotest_common.sh@290 -- # case "$i" in 00:05:44.005 21:20:09 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:05:44.005 21:20:09 -- common/autotest_common.sh@307 -- # [[ -z 2501358 ]] 00:05:44.005 21:20:09 -- common/autotest_common.sh@307 -- # kill -0 2501358 00:05:44.005 21:20:09 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:05:44.005 21:20:09 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:05:44.005 21:20:09 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:05:44.005 21:20:09 -- common/autotest_common.sh@320 -- # local mount target_dir 00:05:44.005 21:20:09 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:05:44.005 21:20:09 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:05:44.005 21:20:09 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:05:44.005 21:20:09 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:05:44.005 21:20:09 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.cGsLWu 00:05:44.005 21:20:09 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:44.005 21:20:09 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:05:44.005 21:20:09 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:05:44.005 21:20:09 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.cGsLWu/tests/target /tmp/spdk.cGsLWu 00:05:44.005 21:20:09 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:05:44.005 21:20:09 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:44.005 21:20:09 -- common/autotest_common.sh@316 -- # df -T 00:05:44.005 21:20:09 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:05:44.005 21:20:09 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:05:44.005 21:20:09 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:05:44.005 21:20:09 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:05:44.005 21:20:09 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # avails["$mount"]=48074395648 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994708992 00:05:44.005 21:20:09 -- common/autotest_common.sh@352 -- # uses["$mount"]=13920313344 00:05:44.005 21:20:09 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # avails["$mount"]=30994739200 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997352448 00:05:44.005 21:20:09 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:05:44.005 21:20:09 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # avails["$mount"]=12390178816 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398944256 00:05:44.005 21:20:09 -- common/autotest_common.sh@352 -- # uses["$mount"]=8765440 00:05:44.005 21:20:09 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996553728 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997356544 00:05:44.005 21:20:09 -- common/autotest_common.sh@352 -- # uses["$mount"]=802816 00:05:44.005 21:20:09 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:44.005 21:20:09 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199463936 00:05:44.005 21:20:09 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199468032 00:05:44.005 21:20:09 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:05:44.005 21:20:09 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:44.005 21:20:09 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:05:44.005 * Looking for test storage... 
00:05:44.005 21:20:09 -- common/autotest_common.sh@357 -- # local target_space new_size 00:05:44.005 21:20:09 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:05:44.005 21:20:09 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.005 21:20:09 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:44.005 21:20:09 -- common/autotest_common.sh@361 -- # mount=/ 00:05:44.005 21:20:09 -- common/autotest_common.sh@363 -- # target_space=48074395648 00:05:44.005 21:20:09 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:05:44.005 21:20:09 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:05:44.005 21:20:09 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:05:44.005 21:20:09 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:05:44.005 21:20:09 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:05:44.005 21:20:09 -- common/autotest_common.sh@370 -- # new_size=16134905856 00:05:44.005 21:20:09 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:44.005 21:20:09 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.005 21:20:09 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.005 21:20:09 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.005 21:20:09 -- common/autotest_common.sh@378 -- # return 0 00:05:44.005 21:20:09 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:05:44.005 21:20:09 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:05:44.005 21:20:09 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:44.005 21:20:09 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:44.005 21:20:09 -- common/autotest_common.sh@1673 -- # true 00:05:44.005 21:20:09 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:05:44.005 21:20:09 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:44.005 21:20:09 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:44.005 21:20:09 -- common/autotest_common.sh@27 -- # exec 00:05:44.005 21:20:09 -- common/autotest_common.sh@29 -- # exec 00:05:44.005 21:20:09 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:44.005 21:20:09 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:44.005 21:20:09 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:44.005 21:20:09 -- common/autotest_common.sh@18 -- # set -x 00:05:44.005 21:20:09 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.005 21:20:09 -- nvmf/common.sh@7 -- # uname -s 00:05:44.005 21:20:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.005 21:20:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.005 21:20:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.005 21:20:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.005 21:20:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.005 21:20:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.005 21:20:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.005 21:20:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.005 21:20:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.005 21:20:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.006 21:20:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:44.006 21:20:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:44.006 21:20:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.006 21:20:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.006 21:20:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:44.006 21:20:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.006 21:20:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.006 21:20:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.006 21:20:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.006 21:20:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.006 21:20:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.006 21:20:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.006 21:20:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.006 21:20:09 -- paths/export.sh@5 -- # export PATH 00:05:44.006 21:20:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.006 21:20:09 -- nvmf/common.sh@47 -- # : 0 00:05:44.006 21:20:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:44.006 21:20:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:44.006 21:20:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.006 21:20:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.006 21:20:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.006 21:20:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:44.006 21:20:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:44.006 21:20:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:44.006 21:20:09 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:44.006 21:20:09 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:44.006 21:20:09 -- target/filesystem.sh@15 -- # nvmftestinit 00:05:44.006 21:20:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:05:44.006 21:20:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:44.006 21:20:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:44.006 21:20:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:44.006 21:20:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:44.006 21:20:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:44.006 21:20:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:44.006 21:20:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:44.006 21:20:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:44.006 21:20:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:44.006 21:20:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:44.006 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:05:46.541 21:20:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:46.541 21:20:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:46.541 21:20:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:46.541 21:20:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:46.541 21:20:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:46.541 21:20:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:46.541 21:20:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:46.541 21:20:11 -- 
nvmf/common.sh@295 -- # net_devs=() 00:05:46.541 21:20:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:46.541 21:20:11 -- nvmf/common.sh@296 -- # e810=() 00:05:46.541 21:20:11 -- nvmf/common.sh@296 -- # local -ga e810 00:05:46.541 21:20:11 -- nvmf/common.sh@297 -- # x722=() 00:05:46.541 21:20:11 -- nvmf/common.sh@297 -- # local -ga x722 00:05:46.541 21:20:11 -- nvmf/common.sh@298 -- # mlx=() 00:05:46.541 21:20:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:46.541 21:20:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:46.541 21:20:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:46.541 21:20:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:46.541 21:20:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:46.541 21:20:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:46.541 21:20:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:46.541 21:20:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:46.541 21:20:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:46.541 21:20:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:46.541 21:20:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:46.541 21:20:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:46.541 21:20:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:46.541 21:20:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:46.541 21:20:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:46.541 21:20:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:46.541 21:20:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:46.541 21:20:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:46.541 21:20:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:46.541 21:20:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:46.542 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:46.542 21:20:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:46.542 21:20:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:46.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:46.542 21:20:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:46.542 21:20:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:46.542 21:20:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.542 21:20:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:46.542 21:20:11 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.542 21:20:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:46.542 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:46.542 21:20:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.542 21:20:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:46.542 21:20:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.542 21:20:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:46.542 21:20:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.542 21:20:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:46.542 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:46.542 21:20:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.542 21:20:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:46.542 21:20:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:46.542 21:20:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:05:46.542 21:20:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:46.542 21:20:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:46.542 21:20:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:46.542 21:20:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:46.542 21:20:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:46.542 21:20:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:46.542 21:20:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:46.542 21:20:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:46.542 21:20:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:46.542 21:20:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:46.542 21:20:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:46.542 21:20:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:46.542 21:20:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:46.542 21:20:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:46.542 21:20:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:46.542 21:20:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:46.542 21:20:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:46.542 21:20:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:46.542 21:20:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:46.542 21:20:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:46.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:46.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:05:46.542 00:05:46.542 --- 10.0.0.2 ping statistics --- 00:05:46.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.542 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:05:46.542 21:20:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:46.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:46.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:05:46.542 00:05:46.542 --- 10.0.0.1 ping statistics --- 00:05:46.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.542 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:05:46.542 21:20:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:46.542 21:20:11 -- nvmf/common.sh@411 -- # return 0 00:05:46.542 21:20:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:46.542 21:20:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:46.542 21:20:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:05:46.542 21:20:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:46.542 21:20:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:05:46.542 21:20:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:05:46.542 21:20:11 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:46.542 21:20:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:46.542 21:20:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.542 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:05:46.542 ************************************ 00:05:46.542 START TEST nvmf_filesystem_no_in_capsule 00:05:46.542 ************************************ 00:05:46.542 21:20:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:05:46.542 21:20:11 -- target/filesystem.sh@47 -- # in_capsule=0 00:05:46.542 21:20:11 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:46.542 21:20:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:05:46.542 21:20:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:46.542 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:05:46.542 21:20:11 -- nvmf/common.sh@470 -- # nvmfpid=2503067 00:05:46.542 21:20:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:46.542 21:20:11 -- nvmf/common.sh@471 -- # waitforlisten 2503067 00:05:46.542 21:20:11 -- common/autotest_common.sh@817 -- # '[' -z 2503067 ']' 00:05:46.542 21:20:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.542 21:20:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:46.542 21:20:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.542 21:20:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:46.542 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:05:46.542 [2024-04-24 21:20:11.921594] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:05:46.542 [2024-04-24 21:20:11.921726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:46.542 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.542 [2024-04-24 21:20:11.994143] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.542 [2024-04-24 21:20:12.114841] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:05:46.542 [2024-04-24 21:20:12.114902] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:46.542 [2024-04-24 21:20:12.114919] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.542 [2024-04-24 21:20:12.114942] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.542 [2024-04-24 21:20:12.114955] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:46.542 [2024-04-24 21:20:12.115064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.542 [2024-04-24 21:20:12.115134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.542 [2024-04-24 21:20:12.115230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.542 [2024-04-24 21:20:12.115232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.475 21:20:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:47.475 21:20:12 -- common/autotest_common.sh@850 -- # return 0 00:05:47.475 21:20:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:05:47.475 21:20:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:47.475 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:05:47.475 21:20:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:47.475 21:20:12 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:47.475 21:20:12 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:05:47.475 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.475 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:05:47.475 [2024-04-24 21:20:12.919742] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.475 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.475 21:20:12 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:47.475 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.475 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:05:47.475 Malloc1 00:05:47.475 21:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.475 21:20:13 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:47.475 21:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.475 21:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:47.475 21:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.475 21:20:13 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:47.475 21:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.475 21:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:47.475 21:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.475 21:20:13 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:47.475 21:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.475 21:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:47.475 [2024-04-24 21:20:13.114113] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:47.475 21:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.475 21:20:13 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:05:47.475 21:20:13 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:05:47.475 21:20:13 -- common/autotest_common.sh@1365 -- # local bdev_info 00:05:47.475 21:20:13 -- common/autotest_common.sh@1366 -- # local bs 00:05:47.475 21:20:13 -- common/autotest_common.sh@1367 -- # local nb 00:05:47.475 21:20:13 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:47.475 21:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.475 21:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:47.475 21:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.475 21:20:13 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:05:47.475 { 00:05:47.475 "name": "Malloc1", 00:05:47.475 "aliases": [ 00:05:47.475 "21a639bb-cf11-4851-bc15-1d6b71f95829" 00:05:47.475 ], 00:05:47.475 "product_name": "Malloc disk", 00:05:47.475 "block_size": 512, 00:05:47.475 "num_blocks": 1048576, 00:05:47.475 "uuid": "21a639bb-cf11-4851-bc15-1d6b71f95829", 00:05:47.475 "assigned_rate_limits": { 00:05:47.476 "rw_ios_per_sec": 0, 00:05:47.476 "rw_mbytes_per_sec": 0, 00:05:47.476 "r_mbytes_per_sec": 0, 00:05:47.476 "w_mbytes_per_sec": 0 00:05:47.476 }, 00:05:47.476 "claimed": true, 00:05:47.476 "claim_type": "exclusive_write", 00:05:47.476 "zoned": false, 00:05:47.476 "supported_io_types": { 00:05:47.476 "read": true, 00:05:47.476 "write": true, 00:05:47.476 "unmap": true, 00:05:47.476 "write_zeroes": true, 00:05:47.476 "flush": true, 00:05:47.476 "reset": true, 00:05:47.476 "compare": false, 00:05:47.476 "compare_and_write": false, 00:05:47.476 "abort": true, 00:05:47.476 "nvme_admin": false, 00:05:47.476 "nvme_io": false 00:05:47.476 }, 00:05:47.476 "memory_domains": [ 00:05:47.476 { 00:05:47.476 "dma_device_id": "system", 00:05:47.476 "dma_device_type": 1 00:05:47.476 }, 00:05:47.476 { 00:05:47.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.476 "dma_device_type": 2 00:05:47.476 } 00:05:47.476 ], 00:05:47.476 "driver_specific": {} 00:05:47.476 } 00:05:47.476 ]' 00:05:47.476 21:20:13 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:05:47.734 21:20:13 -- common/autotest_common.sh@1369 -- # bs=512 00:05:47.734 21:20:13 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:05:47.734 21:20:13 -- common/autotest_common.sh@1370 -- # nb=1048576 00:05:47.734 21:20:13 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:05:47.734 21:20:13 -- common/autotest_common.sh@1374 -- # echo 512 00:05:47.734 21:20:13 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:47.734 21:20:13 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:48.299 21:20:13 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:48.299 21:20:13 -- common/autotest_common.sh@1184 -- # local i=0 00:05:48.299 21:20:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:05:48.299 21:20:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:05:48.299 21:20:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:05:50.197 21:20:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:05:50.197 21:20:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:05:50.197 21:20:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:05:50.197 21:20:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
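Strung together, the rpc_cmd calls and the nvme connect traced above form the whole provisioning path of this test. A sketch that replays them directly with SPDK's scripts/rpc.py (rpc_cmd in the harness is a thin wrapper over it; the rpc.py path and the HOSTNQN/HOSTID variables are assumptions standing in for the gen-hostnqn values in the log):

  # 1) target-side provisioning over the RPC socket (/var/tmp/spdk.sock)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: no in-capsule data (the second pass uses -c 4096)
  $rpc bdev_malloc_create 512 512 -b Malloc1           # 512 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 2) size sanity check, exactly as get_bdev_size does it with jq
  bs=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')    # 512
  nb=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')    # 1048576
  echo $((bs * nb))                                              # 536870912 bytes = 512 MiB

  # 3) initiator-side attach, then poll until the serial shows up in lsblk
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -lt 1 ]; do
      sleep 2      # the harness also caps this loop at 15 tries
  done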
00:05:50.197 21:20:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:05:50.197 21:20:15 -- common/autotest_common.sh@1194 -- # return 0 00:05:50.197 21:20:15 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:50.197 21:20:15 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:05:50.197 21:20:15 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:50.197 21:20:15 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:05:50.197 21:20:15 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:50.197 21:20:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:50.197 21:20:15 -- setup/common.sh@80 -- # echo 536870912 00:05:50.197 21:20:15 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:50.197 21:20:15 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:50.197 21:20:15 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:50.197 21:20:15 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:50.455 21:20:16 -- target/filesystem.sh@69 -- # partprobe 00:05:51.019 21:20:16 -- target/filesystem.sh@70 -- # sleep 1 00:05:52.391 21:20:17 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:05:52.391 21:20:17 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:05:52.391 21:20:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:52.391 21:20:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.391 21:20:17 -- common/autotest_common.sh@10 -- # set +x 00:05:52.391 ************************************ 00:05:52.391 START TEST filesystem_ext4 00:05:52.391 ************************************ 00:05:52.391 21:20:17 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:05:52.391 21:20:17 -- target/filesystem.sh@18 -- # fstype=ext4 00:05:52.391 21:20:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:52.391 21:20:17 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:05:52.391 21:20:17 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:05:52.391 21:20:17 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:05:52.391 21:20:17 -- common/autotest_common.sh@914 -- # local i=0 00:05:52.391 21:20:17 -- common/autotest_common.sh@915 -- # local force 00:05:52.392 21:20:17 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:05:52.392 21:20:17 -- common/autotest_common.sh@918 -- # force=-F 00:05:52.392 21:20:17 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:05:52.392 mke2fs 1.46.5 (30-Dec-2021) 00:05:52.392 Discarding device blocks: 0/522240 done 00:05:52.392 Creating filesystem with 522240 1k blocks and 130560 inodes 00:05:52.392 Filesystem UUID: bab347a6-4891-4937-8f9b-cf85347ec506 00:05:52.392 Superblock backups stored on blocks: 00:05:52.392 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:05:52.392 00:05:52.392 Allocating group tables: 0/64 done 00:05:52.392 Writing inode tables: 0/64 done 00:05:53.324 Creating journal (8192 blocks): done 00:05:53.889 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:05:53.889 00:05:53.889 21:20:19 -- common/autotest_common.sh@931 -- # return 0 00:05:53.889 21:20:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:54.822 21:20:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:54.822 21:20:20 -- target/filesystem.sh@25 -- # sync 00:05:54.822 21:20:20 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:05:54.822 21:20:20 -- target/filesystem.sh@27 -- # sync 00:05:54.822 21:20:20 -- target/filesystem.sh@29 -- # i=0 00:05:54.822 21:20:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:54.822 21:20:20 -- target/filesystem.sh@37 -- # kill -0 2503067 00:05:54.822 21:20:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:54.822 21:20:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:54.822 21:20:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:54.822 21:20:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:54.822 00:05:54.822 real 0m2.625s 00:05:54.822 user 0m0.019s 00:05:54.822 sys 0m0.055s 00:05:54.822 21:20:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.822 21:20:20 -- common/autotest_common.sh@10 -- # set +x 00:05:54.822 ************************************ 00:05:54.822 END TEST filesystem_ext4 00:05:54.822 ************************************ 00:05:54.822 21:20:20 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:05:54.822 21:20:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:54.822 21:20:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.822 21:20:20 -- common/autotest_common.sh@10 -- # set +x 00:05:55.079 ************************************ 00:05:55.079 START TEST filesystem_btrfs 00:05:55.079 ************************************ 00:05:55.079 21:20:20 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:05:55.079 21:20:20 -- target/filesystem.sh@18 -- # fstype=btrfs 00:05:55.079 21:20:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:55.079 21:20:20 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:05:55.079 21:20:20 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:05:55.079 21:20:20 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:05:55.079 21:20:20 -- common/autotest_common.sh@914 -- # local i=0 00:05:55.079 21:20:20 -- common/autotest_common.sh@915 -- # local force 00:05:55.079 21:20:20 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:05:55.079 21:20:20 -- common/autotest_common.sh@920 -- # force=-f 00:05:55.079 21:20:20 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:05:55.336 btrfs-progs v6.6.2 00:05:55.336 See https://btrfs.readthedocs.io for more information. 00:05:55.336 00:05:55.336 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:05:55.336 NOTE: several default settings have changed in version 5.15, please make sure 00:05:55.336 this does not affect your deployments: 00:05:55.336 - DUP for metadata (-m dup) 00:05:55.336 - enabled no-holes (-O no-holes) 00:05:55.336 - enabled free-space-tree (-R free-space-tree) 00:05:55.336 00:05:55.336 Label: (null) 00:05:55.336 UUID: 16a80eca-fc2f-4bf2-8737-d9aa9424900d 00:05:55.336 Node size: 16384 00:05:55.336 Sector size: 4096 00:05:55.337 Filesystem size: 510.00MiB 00:05:55.337 Block group profiles: 00:05:55.337 Data: single 8.00MiB 00:05:55.337 Metadata: DUP 32.00MiB 00:05:55.337 System: DUP 8.00MiB 00:05:55.337 SSD detected: yes 00:05:55.337 Zoned device: no 00:05:55.337 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:05:55.337 Runtime features: free-space-tree 00:05:55.337 Checksum: crc32c 00:05:55.337 Number of devices: 1 00:05:55.337 Devices: 00:05:55.337 ID SIZE PATH 00:05:55.337 1 510.00MiB /dev/nvme0n1p1 00:05:55.337 00:05:55.337 21:20:20 -- common/autotest_common.sh@931 -- # return 0 00:05:55.337 21:20:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:56.270 21:20:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:56.270 21:20:21 -- target/filesystem.sh@25 -- # sync 00:05:56.270 21:20:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:56.270 21:20:21 -- target/filesystem.sh@27 -- # sync 00:05:56.270 21:20:21 -- target/filesystem.sh@29 -- # i=0 00:05:56.270 21:20:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:56.270 21:20:21 -- target/filesystem.sh@37 -- # kill -0 2503067 00:05:56.270 21:20:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:56.270 21:20:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:56.270 21:20:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:56.270 21:20:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:56.270 00:05:56.270 real 0m1.318s 00:05:56.270 user 0m0.022s 00:05:56.270 sys 0m0.113s 00:05:56.270 21:20:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.270 21:20:21 -- common/autotest_common.sh@10 -- # set +x 00:05:56.270 ************************************ 00:05:56.270 END TEST filesystem_btrfs 00:05:56.270 ************************************ 00:05:56.270 21:20:21 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:05:56.270 21:20:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:56.270 21:20:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.270 21:20:21 -- common/autotest_common.sh@10 -- # set +x 00:05:56.270 ************************************ 00:05:56.270 START TEST filesystem_xfs 00:05:56.270 ************************************ 00:05:56.270 21:20:21 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:05:56.270 21:20:21 -- target/filesystem.sh@18 -- # fstype=xfs 00:05:56.270 21:20:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:56.270 21:20:21 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:05:56.270 21:20:21 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:05:56.270 21:20:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:05:56.270 21:20:21 -- common/autotest_common.sh@914 -- # local i=0 00:05:56.270 21:20:21 -- common/autotest_common.sh@915 -- # local force 00:05:56.270 21:20:21 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:05:56.270 21:20:21 -- common/autotest_common.sh@920 -- # force=-f 00:05:56.270 21:20:21 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:05:56.528 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:05:56.528 = sectsz=512 attr=2, projid32bit=1 00:05:56.528 = crc=1 finobt=1, sparse=1, rmapbt=0 00:05:56.528 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:05:56.528 data = bsize=4096 blocks=130560, imaxpct=25 00:05:56.528 = sunit=0 swidth=0 blks 00:05:56.528 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:05:56.528 log =internal log bsize=4096 blocks=16384, version=2 00:05:56.528 = sectsz=512 sunit=0 blks, lazy-count=1 00:05:56.528 realtime =none extsz=4096 blocks=0, rtextents=0 00:05:57.460 Discarding blocks...Done. 00:05:57.460 21:20:23 -- common/autotest_common.sh@931 -- # return 0 00:05:57.460 21:20:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:59.359 21:20:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:59.359 21:20:24 -- target/filesystem.sh@25 -- # sync 00:05:59.359 21:20:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:59.359 21:20:24 -- target/filesystem.sh@27 -- # sync 00:05:59.359 21:20:24 -- target/filesystem.sh@29 -- # i=0 00:05:59.359 21:20:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:59.359 21:20:24 -- target/filesystem.sh@37 -- # kill -0 2503067 00:05:59.359 21:20:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:59.359 21:20:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:59.359 21:20:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:59.359 21:20:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:59.359 00:05:59.359 real 0m3.042s 00:05:59.359 user 0m0.019s 00:05:59.359 sys 0m0.057s 00:05:59.359 21:20:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.359 21:20:24 -- common/autotest_common.sh@10 -- # set +x 00:05:59.359 ************************************ 00:05:59.359 END TEST filesystem_xfs 00:05:59.359 ************************************ 00:05:59.359 21:20:24 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:05:59.618 21:20:25 -- target/filesystem.sh@93 -- # sync 00:05:59.618 21:20:25 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:05:59.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:05:59.618 21:20:25 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:05:59.618 21:20:25 -- common/autotest_common.sh@1205 -- # local i=0 00:05:59.618 21:20:25 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:05:59.618 21:20:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:59.618 21:20:25 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:05:59.618 21:20:25 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:59.618 21:20:25 -- common/autotest_common.sh@1217 -- # return 0 00:05:59.618 21:20:25 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:59.618 21:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:59.618 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:05:59.618 21:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:59.618 21:20:25 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:05:59.618 21:20:25 -- target/filesystem.sh@101 -- # killprocess 2503067 00:05:59.618 21:20:25 -- common/autotest_common.sh@936 -- # '[' -z 2503067 ']' 00:05:59.618 21:20:25 -- common/autotest_common.sh@940 -- # kill -0 2503067 00:05:59.618 21:20:25 -- 
common/autotest_common.sh@941 -- # uname 00:05:59.618 21:20:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.618 21:20:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2503067 00:05:59.618 21:20:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.618 21:20:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.618 21:20:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2503067' 00:05:59.618 killing process with pid 2503067 00:05:59.618 21:20:25 -- common/autotest_common.sh@955 -- # kill 2503067 00:05:59.618 21:20:25 -- common/autotest_common.sh@960 -- # wait 2503067 00:06:00.185 21:20:25 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:00.185 00:06:00.185 real 0m13.799s 00:06:00.185 user 0m53.119s 00:06:00.185 sys 0m2.043s 00:06:00.185 21:20:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.185 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:06:00.185 ************************************ 00:06:00.185 END TEST nvmf_filesystem_no_in_capsule 00:06:00.185 ************************************ 00:06:00.185 21:20:25 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:00.185 21:20:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:00.185 21:20:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.185 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:06:00.185 ************************************ 00:06:00.185 START TEST nvmf_filesystem_in_capsule 00:06:00.185 ************************************ 00:06:00.185 21:20:25 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:06:00.185 21:20:25 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:00.185 21:20:25 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:00.185 21:20:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:00.185 21:20:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:00.185 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:06:00.185 21:20:25 -- nvmf/common.sh@470 -- # nvmfpid=2504923 00:06:00.185 21:20:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:00.185 21:20:25 -- nvmf/common.sh@471 -- # waitforlisten 2504923 00:06:00.185 21:20:25 -- common/autotest_common.sh@817 -- # '[' -z 2504923 ']' 00:06:00.185 21:20:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.185 21:20:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.185 21:20:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.185 21:20:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.185 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:06:00.186 [2024-04-24 21:20:25.852073] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
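The killprocess call traced just before this second target start is the harness's standard teardown: check that the PID is still alive and really is an SPDK reactor before signalling it, then wait so the exit status propagates. A condensed sketch (the branch the real helper takes when the process name is sudo is elided here):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                   # still running?
      if [ "$(uname)" = Linux ]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK target
          # the real helper special-cases name == sudo; elided in this sketch
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                  # reap and propagate exit status
  }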
00:06:00.186 [2024-04-24 21:20:25.852165] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.444 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.444 [2024-04-24 21:20:25.924524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.444 [2024-04-24 21:20:26.043203] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:00.444 [2024-04-24 21:20:26.043260] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:00.444 [2024-04-24 21:20:26.043277] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.444 [2024-04-24 21:20:26.043290] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.444 [2024-04-24 21:20:26.043302] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:00.444 [2024-04-24 21:20:26.043696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.444 [2024-04-24 21:20:26.043726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.444 [2024-04-24 21:20:26.043781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.444 [2024-04-24 21:20:26.043784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.378 21:20:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:01.378 21:20:26 -- common/autotest_common.sh@850 -- # return 0 00:06:01.378 21:20:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:01.378 21:20:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:01.378 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.378 21:20:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:01.378 21:20:26 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:01.378 21:20:26 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:01.378 21:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.378 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.378 [2024-04-24 21:20:26.825701] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.378 21:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.378 21:20:26 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:01.378 21:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.378 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.378 Malloc1 00:06:01.378 21:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.378 21:20:26 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:01.378 21:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.378 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.378 21:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.378 21:20:27 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:01.378 21:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.378 21:20:27 -- common/autotest_common.sh@10 -- # set +x 00:06:01.378 21:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.378 21:20:27 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:01.378 21:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.378 21:20:27 -- common/autotest_common.sh@10 -- # set +x 00:06:01.378 [2024-04-24 21:20:27.013199] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:01.378 21:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.378 21:20:27 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:01.378 21:20:27 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:01.378 21:20:27 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:01.378 21:20:27 -- common/autotest_common.sh@1366 -- # local bs 00:06:01.378 21:20:27 -- common/autotest_common.sh@1367 -- # local nb 00:06:01.378 21:20:27 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:01.378 21:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.378 21:20:27 -- common/autotest_common.sh@10 -- # set +x 00:06:01.378 21:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.378 21:20:27 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:01.378 { 00:06:01.378 "name": "Malloc1", 00:06:01.378 "aliases": [ 00:06:01.378 "124c5a08-40d5-43a8-8527-7442a6fe16b2" 00:06:01.378 ], 00:06:01.378 "product_name": "Malloc disk", 00:06:01.378 "block_size": 512, 00:06:01.378 "num_blocks": 1048576, 00:06:01.378 "uuid": "124c5a08-40d5-43a8-8527-7442a6fe16b2", 00:06:01.378 "assigned_rate_limits": { 00:06:01.378 "rw_ios_per_sec": 0, 00:06:01.378 "rw_mbytes_per_sec": 0, 00:06:01.378 "r_mbytes_per_sec": 0, 00:06:01.378 "w_mbytes_per_sec": 0 00:06:01.378 }, 00:06:01.378 "claimed": true, 00:06:01.378 "claim_type": "exclusive_write", 00:06:01.378 "zoned": false, 00:06:01.378 "supported_io_types": { 00:06:01.378 "read": true, 00:06:01.378 "write": true, 00:06:01.378 "unmap": true, 00:06:01.378 "write_zeroes": true, 00:06:01.378 "flush": true, 00:06:01.378 "reset": true, 00:06:01.378 "compare": false, 00:06:01.379 "compare_and_write": false, 00:06:01.379 "abort": true, 00:06:01.379 "nvme_admin": false, 00:06:01.379 "nvme_io": false 00:06:01.379 }, 00:06:01.379 "memory_domains": [ 00:06:01.379 { 00:06:01.379 "dma_device_id": "system", 00:06:01.379 "dma_device_type": 1 00:06:01.379 }, 00:06:01.379 { 00:06:01.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.379 "dma_device_type": 2 00:06:01.379 } 00:06:01.379 ], 00:06:01.379 "driver_specific": {} 00:06:01.379 } 00:06:01.379 ]' 00:06:01.379 21:20:27 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:01.636 21:20:27 -- common/autotest_common.sh@1369 -- # bs=512 00:06:01.636 21:20:27 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:01.636 21:20:27 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:01.636 21:20:27 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:01.636 21:20:27 -- common/autotest_common.sh@1374 -- # echo 512 00:06:01.636 21:20:27 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:01.636 21:20:27 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:02.203 21:20:27 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:02.203 21:20:27 -- common/autotest_common.sh@1184 -- # local i=0 00:06:02.203 21:20:27 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:02.203 21:20:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:02.203 21:20:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:04.732 21:20:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:04.732 21:20:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:04.732 21:20:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:04.732 21:20:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:04.732 21:20:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:04.732 21:20:29 -- common/autotest_common.sh@1194 -- # return 0 00:06:04.732 21:20:29 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:04.732 21:20:29 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:04.732 21:20:29 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:04.732 21:20:29 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:04.732 21:20:29 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:04.732 21:20:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:04.732 21:20:29 -- setup/common.sh@80 -- # echo 536870912 00:06:04.732 21:20:29 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:04.732 21:20:29 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:04.732 21:20:29 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:04.732 21:20:29 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:04.732 21:20:30 -- target/filesystem.sh@69 -- # partprobe 00:06:05.665 21:20:31 -- target/filesystem.sh@70 -- # sleep 1 00:06:06.601 21:20:32 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:06.601 21:20:32 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:06.601 21:20:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:06.601 21:20:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.601 21:20:32 -- common/autotest_common.sh@10 -- # set +x 00:06:06.601 ************************************ 00:06:06.601 START TEST filesystem_in_capsule_ext4 00:06:06.601 ************************************ 00:06:06.601 21:20:32 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:06.601 21:20:32 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:06.601 21:20:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:06.601 21:20:32 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:06.601 21:20:32 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:06.601 21:20:32 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:06.601 21:20:32 -- common/autotest_common.sh@914 -- # local i=0 00:06:06.601 21:20:32 -- common/autotest_common.sh@915 -- # local force 00:06:06.601 21:20:32 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:06.601 21:20:32 -- common/autotest_common.sh@918 -- # force=-F 00:06:06.601 21:20:32 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:06.601 mke2fs 1.46.5 (30-Dec-2021) 00:06:06.859 Discarding device blocks: 0/522240 done 00:06:06.859 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:06.859 Filesystem UUID: 0524fd07-8eaf-4a0f-a7fd-2d0f22bd4a01 00:06:06.859 Superblock backups stored on blocks: 00:06:06.859 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:06.859 00:06:06.859 
Allocating group tables: 0/64 done 00:06:06.859 Writing inode tables: 0/64 done 00:06:09.014 Creating journal (8192 blocks): done 00:06:09.836 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:06:09.836 00:06:09.836 21:20:35 -- common/autotest_common.sh@931 -- # return 0 00:06:09.836 21:20:35 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:10.401 21:20:35 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:10.401 21:20:35 -- target/filesystem.sh@25 -- # sync 00:06:10.401 21:20:35 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:10.401 21:20:35 -- target/filesystem.sh@27 -- # sync 00:06:10.401 21:20:35 -- target/filesystem.sh@29 -- # i=0 00:06:10.401 21:20:35 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:10.401 21:20:36 -- target/filesystem.sh@37 -- # kill -0 2504923 00:06:10.401 21:20:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:10.401 21:20:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:10.401 21:20:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:10.401 21:20:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:10.401 00:06:10.401 real 0m3.846s 00:06:10.401 user 0m0.023s 00:06:10.401 sys 0m0.056s 00:06:10.401 21:20:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.401 21:20:36 -- common/autotest_common.sh@10 -- # set +x 00:06:10.401 ************************************ 00:06:10.401 END TEST filesystem_in_capsule_ext4 00:06:10.401 ************************************ 00:06:10.401 21:20:36 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:10.401 21:20:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:10.401 21:20:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.401 21:20:36 -- common/autotest_common.sh@10 -- # set +x 00:06:10.672 ************************************ 00:06:10.672 START TEST filesystem_in_capsule_btrfs 00:06:10.672 ************************************ 00:06:10.672 21:20:36 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:10.672 21:20:36 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:10.672 21:20:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:10.672 21:20:36 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:10.672 21:20:36 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:10.672 21:20:36 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:10.672 21:20:36 -- common/autotest_common.sh@914 -- # local i=0 00:06:10.672 21:20:36 -- common/autotest_common.sh@915 -- # local force 00:06:10.672 21:20:36 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:10.672 21:20:36 -- common/autotest_common.sh@920 -- # force=-f 00:06:10.672 21:20:36 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:10.932 btrfs-progs v6.6.2 00:06:10.932 See https://btrfs.readthedocs.io for more information. 00:06:10.932 00:06:10.932 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:10.932 NOTE: several default settings have changed in version 5.15, please make sure
00:06:10.932 this does not affect your deployments:
00:06:10.932 - DUP for metadata (-m dup)
00:06:10.932 - enabled no-holes (-O no-holes)
00:06:10.932 - enabled free-space-tree (-R free-space-tree)
00:06:10.932 
00:06:10.932 Label: (null)
00:06:10.932 UUID: 51630cfd-e7f6-4b5e-be87-158888d98c4e
00:06:10.932 Node size: 16384
00:06:10.932 Sector size: 4096
00:06:10.932 Filesystem size: 510.00MiB
00:06:10.932 Block group profiles:
00:06:10.932 Data: single 8.00MiB
00:06:10.932 Metadata: DUP 32.00MiB
00:06:10.932 System: DUP 8.00MiB
00:06:10.932 SSD detected: yes
00:06:10.932 Zoned device: no
00:06:10.932 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:06:10.932 Runtime features: free-space-tree
00:06:10.932 Checksum: crc32c
00:06:10.932 Number of devices: 1
00:06:10.932 Devices:
00:06:10.932 ID SIZE PATH
00:06:10.932 1 510.00MiB /dev/nvme0n1p1
00:06:10.932 
00:06:10.932 21:20:36 -- common/autotest_common.sh@931 -- # return 0
00:06:10.932 21:20:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:11.864 21:20:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:11.864 21:20:37 -- target/filesystem.sh@25 -- # sync
00:06:11.864 21:20:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:11.864 21:20:37 -- target/filesystem.sh@27 -- # sync
00:06:11.864 21:20:37 -- target/filesystem.sh@29 -- # i=0
00:06:11.864 21:20:37 -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:11.864 21:20:37 -- target/filesystem.sh@37 -- # kill -0 2504923
00:06:11.864 21:20:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:11.864 21:20:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:11.864 21:20:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:11.864 21:20:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:11.864 
00:06:11.864 real 0m1.377s
00:06:11.864 user 0m0.018s
00:06:11.864 sys 0m0.114s
00:06:11.864 21:20:37 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:11.864 21:20:37 -- common/autotest_common.sh@10 -- # set +x
00:06:11.864 ************************************
00:06:11.864 END TEST filesystem_in_capsule_btrfs
00:06:11.864 ************************************
00:06:12.124 21:20:37 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:06:12.124 21:20:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:06:12.124 21:20:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:12.124 21:20:37 -- common/autotest_common.sh@10 -- # set +x
00:06:12.124 ************************************
00:06:12.124 START TEST filesystem_in_capsule_xfs
00:06:12.124 ************************************
00:06:12.124 21:20:37 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1
00:06:12.124 21:20:37 -- target/filesystem.sh@18 -- # fstype=xfs
00:06:12.124 21:20:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:12.124 21:20:37 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:06:12.124 21:20:37 -- common/autotest_common.sh@912 -- # local fstype=xfs
00:06:12.124 21:20:37 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1
00:06:12.124 21:20:37 -- common/autotest_common.sh@914 -- # local i=0
00:06:12.124 21:20:37 -- common/autotest_common.sh@915 -- # local force
00:06:12.124 21:20:37 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']'
00:06:12.124 21:20:37 -- common/autotest_common.sh@920 -- # force=-f
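The make_filesystem trace ending just above captures the one non-obvious detail in these filesystem passes: mkfs.ext4 spells its force flag -F while mkfs.btrfs and mkfs.xfs use -f, and the helper is written to retry because a freshly probed partition can transiently refuse mkfs. A sketch of that logic; the retry bound and delay here are placeholders, not the harness's exact values:

  make_filesystem() {
      local fstype=$1 dev_name=$2 i=0 force
      if [ "$fstype" = ext4 ]; then
          force=-F           # ext4 is the odd one out
      else
          force=-f           # btrfs and xfs
      fi
      until "mkfs.$fstype" $force "$dev_name"; do
          i=$((i + 1))
          [ "$i" -le 10 ] || return 1    # placeholder bound
          sleep 1                        # give partprobe/udev time to settle
      done
  }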
00:06:12.124 21:20:37 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1
00:06:12.124 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:06:12.124 = sectsz=512 attr=2, projid32bit=1
00:06:12.124 = crc=1 finobt=1, sparse=1, rmapbt=0
00:06:12.124 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:06:12.124 data = bsize=4096 blocks=130560, imaxpct=25
00:06:12.124 = sunit=0 swidth=0 blks
00:06:12.124 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:06:12.124 log =internal log bsize=4096 blocks=16384, version=2
00:06:12.124 = sectsz=512 sunit=0 blks, lazy-count=1
00:06:12.124 realtime =none extsz=4096 blocks=0, rtextents=0
00:06:13.062 Discarding blocks...Done.
00:06:13.062 21:20:38 -- common/autotest_common.sh@931 -- # return 0
00:06:13.062 21:20:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:15.588 21:20:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:15.588 21:20:40 -- target/filesystem.sh@25 -- # sync
00:06:15.588 21:20:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:15.588 21:20:40 -- target/filesystem.sh@27 -- # sync
00:06:15.588 21:20:40 -- target/filesystem.sh@29 -- # i=0
00:06:15.588 21:20:40 -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:15.588 21:20:40 -- target/filesystem.sh@37 -- # kill -0 2504923
00:06:15.588 21:20:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:15.588 21:20:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:15.588 21:20:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:15.588 21:20:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:15.588 
00:06:15.588 real 0m3.224s
00:06:15.588 user 0m0.021s
00:06:15.588 sys 0m0.055s
00:06:15.588 21:20:40 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:15.588 21:20:40 -- common/autotest_common.sh@10 -- # set +x
00:06:15.588 ************************************
00:06:15.588 END TEST filesystem_in_capsule_xfs
00:06:15.588 ************************************
00:06:15.588 21:20:40 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:06:15.588 21:20:40 -- target/filesystem.sh@93 -- # sync
00:06:15.589 21:20:40 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:06:15.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:06:15.589 21:20:40 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:06:15.589 21:20:40 -- common/autotest_common.sh@1205 -- # local i=0
00:06:15.589 21:20:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL
00:06:15.589 21:20:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:06:15.589 21:20:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL
00:06:15.589 21:20:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME
00:06:15.589 21:20:40 -- common/autotest_common.sh@1217 -- # return 0
00:06:15.589 21:20:40 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:15.589 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:15.589 21:20:40 -- common/autotest_common.sh@10 -- # set +x
00:06:15.589 21:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:15.589 21:20:41 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:06:15.589 21:20:41 -- target/filesystem.sh@101 -- # killprocess 2504923
00:06:15.589 21:20:41 -- common/autotest_common.sh@936 -- # '[' -z 2504923 ']'
00:06:15.589 21:20:41 -- common/autotest_common.sh@940 -- # kill -0 2504923
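Every filesystem variant above then gets the same smoke test: mount it, prove one file can be created and deleted with a sync on each side, unmount, and confirm via lsblk that the namespace and partition survived with the target process still alive. In sketch form, using the device names from the trace ($nvmfpid standing for the recorded target PID):

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                      # write path works
  sync
  rm /mnt/device/aaa                         # delete path works
  sync
  umount /mnt/device                         # the harness retries this with an i counter
  kill -0 "$nvmfpid"                         # target must have survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible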
00:06:15.589 21:20:41 -- common/autotest_common.sh@941 -- # uname 00:06:15.589 21:20:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.589 21:20:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2504923 00:06:15.589 21:20:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.589 21:20:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.589 21:20:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2504923' 00:06:15.589 killing process with pid 2504923 00:06:15.589 21:20:41 -- common/autotest_common.sh@955 -- # kill 2504923 00:06:15.589 21:20:41 -- common/autotest_common.sh@960 -- # wait 2504923 00:06:16.156 21:20:41 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:16.156 00:06:16.156 real 0m15.729s 00:06:16.156 user 1m0.665s 00:06:16.156 sys 0m2.223s 00:06:16.156 21:20:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.156 21:20:41 -- common/autotest_common.sh@10 -- # set +x 00:06:16.156 ************************************ 00:06:16.156 END TEST nvmf_filesystem_in_capsule 00:06:16.156 ************************************ 00:06:16.156 21:20:41 -- target/filesystem.sh@108 -- # nvmftestfini 00:06:16.156 21:20:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:16.156 21:20:41 -- nvmf/common.sh@117 -- # sync 00:06:16.156 21:20:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:16.156 21:20:41 -- nvmf/common.sh@120 -- # set +e 00:06:16.156 21:20:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:16.156 21:20:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:16.156 rmmod nvme_tcp 00:06:16.156 rmmod nvme_fabrics 00:06:16.156 rmmod nvme_keyring 00:06:16.156 21:20:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:16.156 21:20:41 -- nvmf/common.sh@124 -- # set -e 00:06:16.156 21:20:41 -- nvmf/common.sh@125 -- # return 0 00:06:16.156 21:20:41 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:06:16.156 21:20:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:16.156 21:20:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:16.156 21:20:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:16.156 21:20:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:16.156 21:20:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:16.156 21:20:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.156 21:20:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:16.156 21:20:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.061 21:20:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:18.061 00:06:18.061 real 0m34.252s 00:06:18.061 user 1m54.735s 00:06:18.061 sys 0m6.009s 00:06:18.061 21:20:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.061 21:20:43 -- common/autotest_common.sh@10 -- # set +x 00:06:18.061 ************************************ 00:06:18.061 END TEST nvmf_filesystem 00:06:18.061 ************************************ 00:06:18.061 21:20:43 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:18.061 21:20:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:18.061 21:20:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.061 21:20:43 -- common/autotest_common.sh@10 -- # set +x 00:06:18.320 ************************************ 00:06:18.320 START TEST nvmf_discovery 00:06:18.320 ************************************ 00:06:18.320 
21:20:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:18.320 * Looking for test storage... 00:06:18.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.320 21:20:43 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.320 21:20:43 -- nvmf/common.sh@7 -- # uname -s 00:06:18.320 21:20:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.320 21:20:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.320 21:20:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.320 21:20:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.320 21:20:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.320 21:20:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.320 21:20:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.320 21:20:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.320 21:20:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.320 21:20:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.320 21:20:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:18.320 21:20:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:18.320 21:20:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.320 21:20:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.320 21:20:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:18.320 21:20:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.320 21:20:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.320 21:20:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.320 21:20:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.320 21:20:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.320 21:20:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.320 21:20:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.320 21:20:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.320 21:20:43 -- paths/export.sh@5 -- # export PATH 00:06:18.320 21:20:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.320 21:20:43 -- nvmf/common.sh@47 -- # : 0 00:06:18.320 21:20:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:18.320 21:20:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:18.320 21:20:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.320 21:20:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.320 21:20:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.320 21:20:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:18.320 21:20:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:18.320 21:20:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:18.320 21:20:43 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:18.320 21:20:43 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:18.320 21:20:43 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:18.320 21:20:43 -- target/discovery.sh@15 -- # hash nvme 00:06:18.320 21:20:43 -- target/discovery.sh@20 -- # nvmftestinit 00:06:18.320 21:20:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:18.320 21:20:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:18.320 21:20:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:18.321 21:20:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:18.321 21:20:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:18.321 21:20:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.321 21:20:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:18.321 21:20:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.321 21:20:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:18.321 21:20:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:18.321 21:20:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:18.321 21:20:43 -- common/autotest_common.sh@10 -- # set +x 00:06:20.223 21:20:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:20.223 21:20:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:20.224 21:20:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:20.224 21:20:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:20.224 21:20:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:20.224 21:20:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:20.224 21:20:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:20.224 21:20:45 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:20.224 21:20:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:20.224 21:20:45 -- nvmf/common.sh@296 -- # e810=() 00:06:20.224 21:20:45 -- nvmf/common.sh@296 -- # local -ga e810 00:06:20.224 21:20:45 -- nvmf/common.sh@297 -- # x722=() 00:06:20.224 21:20:45 -- nvmf/common.sh@297 -- # local -ga x722 00:06:20.224 21:20:45 -- nvmf/common.sh@298 -- # mlx=() 00:06:20.224 21:20:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:20.224 21:20:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:20.224 21:20:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:20.224 21:20:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:20.224 21:20:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:20.224 21:20:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:20.224 21:20:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:20.224 21:20:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:20.224 21:20:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:20.224 21:20:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:20.224 21:20:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:20.224 21:20:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:20.224 21:20:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:20.224 21:20:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:20.224 21:20:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:20.224 21:20:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.224 21:20:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:20.224 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:20.224 21:20:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.224 21:20:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:20.224 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:20.224 21:20:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:20.224 21:20:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.224 21:20:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.224 21:20:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:20.224 21:20:45 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.224 21:20:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:20.224 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:20.224 21:20:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.224 21:20:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.224 21:20:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.224 21:20:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:20.224 21:20:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.224 21:20:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:20.224 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:20.224 21:20:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.224 21:20:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:20.224 21:20:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:20.224 21:20:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:20.224 21:20:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:20.224 21:20:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:20.224 21:20:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:20.224 21:20:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:20.224 21:20:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:20.224 21:20:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:20.224 21:20:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:20.224 21:20:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:20.224 21:20:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:20.224 21:20:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:20.224 21:20:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:20.224 21:20:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:20.224 21:20:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:20.224 21:20:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:20.482 21:20:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:20.482 21:20:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:20.482 21:20:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:20.482 21:20:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:20.482 21:20:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:20.482 21:20:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:20.482 21:20:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:20.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:20.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:06:20.483 00:06:20.483 --- 10.0.0.2 ping statistics --- 00:06:20.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.483 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:06:20.483 21:20:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:20.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:20.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:06:20.483 00:06:20.483 --- 10.0.0.1 ping statistics --- 00:06:20.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.483 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:06:20.483 21:20:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.483 21:20:46 -- nvmf/common.sh@411 -- # return 0 00:06:20.483 21:20:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:20.483 21:20:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.483 21:20:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:20.483 21:20:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:20.483 21:20:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.483 21:20:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:20.483 21:20:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:20.483 21:20:46 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:20.483 21:20:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:20.483 21:20:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:20.483 21:20:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.483 21:20:46 -- nvmf/common.sh@470 -- # nvmfpid=2508858 00:06:20.483 21:20:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:20.483 21:20:46 -- nvmf/common.sh@471 -- # waitforlisten 2508858 00:06:20.483 21:20:46 -- common/autotest_common.sh@817 -- # '[' -z 2508858 ']' 00:06:20.483 21:20:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.483 21:20:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:20.483 21:20:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.483 21:20:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:20.483 21:20:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.483 [2024-04-24 21:20:46.086372] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:06:20.483 [2024-04-24 21:20:46.086449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.483 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.483 [2024-04-24 21:20:46.157398] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.741 [2024-04-24 21:20:46.276361] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:20.741 [2024-04-24 21:20:46.276436] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.741 [2024-04-24 21:20:46.276453] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.741 [2024-04-24 21:20:46.276466] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.741 [2024-04-24 21:20:46.276478] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
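The EAL banner above is nvmf_tgt coming up inside the cvl_0_0_ns_spdk network namespace; the harness launches it via ip netns exec and then blocks until the RPC socket answers. Roughly (a sketch only — it assumes the harness backgrounds the target and captures its pid, which is how nvmfpid=2508858 ends up in waitforlisten):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: 4 reactor cores
nvmfpid=$!
waitforlisten $nvmfpid   # harness helper: waits until the target listens on /var/tmp/spdk.sock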
00:06:20.741 [2024-04-24 21:20:46.276574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.741 [2024-04-24 21:20:46.276643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.741 [2024-04-24 21:20:46.276693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.741 [2024-04-24 21:20:46.276694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.676 21:20:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.676 21:20:47 -- common/autotest_common.sh@850 -- # return 0 00:06:21.676 21:20:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:21.676 21:20:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:21.676 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.676 21:20:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.676 21:20:47 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:21.676 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.676 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.676 [2024-04-24 21:20:47.062730] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.676 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.676 21:20:47 -- target/discovery.sh@26 -- # seq 1 4 00:06:21.676 21:20:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:21.676 21:20:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:21.676 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.676 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.676 Null1 00:06:21.676 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.676 21:20:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:21.676 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.676 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 [2024-04-24 21:20:47.103012] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:21.677 21:20:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 Null2 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:21.677 21:20:47 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:21.677 21:20:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 Null3 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:21.677 21:20:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 Null4 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:21.677 
21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.677 21:20:47 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:21.677 00:06:21.677 Discovery Log Number of Records 6, Generation counter 6 00:06:21.677 =====Discovery Log Entry 0====== 00:06:21.677 trtype: tcp 00:06:21.677 adrfam: ipv4 00:06:21.677 subtype: current discovery subsystem 00:06:21.677 treq: not required 00:06:21.677 portid: 0 00:06:21.677 trsvcid: 4420 00:06:21.677 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:21.677 traddr: 10.0.0.2 00:06:21.677 eflags: explicit discovery connections, duplicate discovery information 00:06:21.677 sectype: none 00:06:21.677 =====Discovery Log Entry 1====== 00:06:21.677 trtype: tcp 00:06:21.677 adrfam: ipv4 00:06:21.677 subtype: nvme subsystem 00:06:21.677 treq: not required 00:06:21.677 portid: 0 00:06:21.677 trsvcid: 4420 00:06:21.677 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:21.677 traddr: 10.0.0.2 00:06:21.677 eflags: none 00:06:21.677 sectype: none 00:06:21.677 =====Discovery Log Entry 2====== 00:06:21.677 trtype: tcp 00:06:21.677 adrfam: ipv4 00:06:21.677 subtype: nvme subsystem 00:06:21.677 treq: not required 00:06:21.677 portid: 0 00:06:21.677 trsvcid: 4420 00:06:21.677 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:21.677 traddr: 10.0.0.2 00:06:21.677 eflags: none 00:06:21.677 sectype: none 00:06:21.677 =====Discovery Log Entry 3====== 00:06:21.677 trtype: tcp 00:06:21.677 adrfam: ipv4 00:06:21.677 subtype: nvme subsystem 00:06:21.677 treq: not required 00:06:21.677 portid: 0 00:06:21.677 trsvcid: 4420 00:06:21.677 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:21.677 traddr: 10.0.0.2 00:06:21.677 eflags: none 00:06:21.677 sectype: none 00:06:21.677 =====Discovery Log Entry 4====== 00:06:21.677 trtype: tcp 00:06:21.677 adrfam: ipv4 00:06:21.677 subtype: nvme subsystem 00:06:21.677 treq: not required 00:06:21.677 portid: 0 00:06:21.677 trsvcid: 4420 00:06:21.677 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:21.677 traddr: 10.0.0.2 00:06:21.677 eflags: none 00:06:21.677 sectype: none 00:06:21.677 =====Discovery Log Entry 5====== 00:06:21.677 trtype: tcp 00:06:21.677 adrfam: ipv4 00:06:21.677 subtype: discovery subsystem referral 00:06:21.677 treq: not required 00:06:21.677 portid: 0 00:06:21.677 trsvcid: 4430 00:06:21.677 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:21.677 traddr: 10.0.0.2 00:06:21.677 eflags: none 00:06:21.677 sectype: none 00:06:21.677 21:20:47 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:21.677 Perform nvmf subsystem discovery via RPC 00:06:21.677 21:20:47 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:21.677 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.677 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 [2024-04-24 21:20:47.307459] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:06:21.677 [ 00:06:21.677 { 00:06:21.677 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:21.677 "subtype": "Discovery", 00:06:21.677 "listen_addresses": [ 00:06:21.677 { 00:06:21.677 "transport": "TCP", 00:06:21.677 "trtype": "TCP", 00:06:21.677 "adrfam": "IPv4", 00:06:21.677 "traddr": "10.0.0.2", 00:06:21.677 "trsvcid": "4420" 00:06:21.677 } 00:06:21.677 ], 00:06:21.677 "allow_any_host": true, 00:06:21.677 "hosts": [] 00:06:21.677 }, 00:06:21.677 { 00:06:21.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:21.678 "subtype": "NVMe", 00:06:21.678 "listen_addresses": [ 00:06:21.678 { 00:06:21.678 "transport": "TCP", 00:06:21.678 "trtype": "TCP", 00:06:21.678 "adrfam": "IPv4", 00:06:21.678 "traddr": "10.0.0.2", 00:06:21.678 "trsvcid": "4420" 00:06:21.678 } 00:06:21.678 ], 00:06:21.678 "allow_any_host": true, 00:06:21.678 "hosts": [], 00:06:21.678 "serial_number": "SPDK00000000000001", 00:06:21.678 "model_number": "SPDK bdev Controller", 00:06:21.678 "max_namespaces": 32, 00:06:21.678 "min_cntlid": 1, 00:06:21.678 "max_cntlid": 65519, 00:06:21.678 "namespaces": [ 00:06:21.678 { 00:06:21.678 "nsid": 1, 00:06:21.678 "bdev_name": "Null1", 00:06:21.678 "name": "Null1", 00:06:21.678 "nguid": "CDAA34C97A844BCDA4A30550EDC7486E", 00:06:21.678 "uuid": "cdaa34c9-7a84-4bcd-a4a3-0550edc7486e" 00:06:21.678 } 00:06:21.678 ] 00:06:21.678 }, 00:06:21.678 { 00:06:21.678 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:21.678 "subtype": "NVMe", 00:06:21.678 "listen_addresses": [ 00:06:21.678 { 00:06:21.678 "transport": "TCP", 00:06:21.678 "trtype": "TCP", 00:06:21.678 "adrfam": "IPv4", 00:06:21.678 "traddr": "10.0.0.2", 00:06:21.678 "trsvcid": "4420" 00:06:21.678 } 00:06:21.678 ], 00:06:21.678 "allow_any_host": true, 00:06:21.678 "hosts": [], 00:06:21.678 "serial_number": "SPDK00000000000002", 00:06:21.678 "model_number": "SPDK bdev Controller", 00:06:21.678 "max_namespaces": 32, 00:06:21.678 "min_cntlid": 1, 00:06:21.678 "max_cntlid": 65519, 00:06:21.678 "namespaces": [ 00:06:21.678 { 00:06:21.678 "nsid": 1, 00:06:21.678 "bdev_name": "Null2", 00:06:21.678 "name": "Null2", 00:06:21.678 "nguid": "79B0BB2DB11C41069A27BEE9D060B914", 00:06:21.678 "uuid": "79b0bb2d-b11c-4106-9a27-bee9d060b914" 00:06:21.678 } 00:06:21.678 ] 00:06:21.678 }, 00:06:21.678 { 00:06:21.678 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:21.678 "subtype": "NVMe", 00:06:21.678 "listen_addresses": [ 00:06:21.678 { 00:06:21.678 "transport": "TCP", 00:06:21.678 "trtype": "TCP", 00:06:21.678 "adrfam": "IPv4", 00:06:21.678 "traddr": "10.0.0.2", 00:06:21.678 "trsvcid": "4420" 00:06:21.678 } 00:06:21.678 ], 00:06:21.678 "allow_any_host": true, 00:06:21.678 "hosts": [], 00:06:21.678 "serial_number": "SPDK00000000000003", 00:06:21.678 "model_number": "SPDK bdev Controller", 00:06:21.678 "max_namespaces": 32, 00:06:21.678 "min_cntlid": 1, 00:06:21.678 "max_cntlid": 65519, 00:06:21.678 "namespaces": [ 00:06:21.678 { 00:06:21.678 "nsid": 1, 00:06:21.678 "bdev_name": "Null3", 00:06:21.678 "name": "Null3", 00:06:21.678 "nguid": "588B458399914291B40208342F55EAFB", 00:06:21.678 "uuid": "588b4583-9991-4291-b402-08342f55eafb" 00:06:21.678 } 00:06:21.678 ] 
00:06:21.678 }, 00:06:21.678 { 00:06:21.678 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:21.678 "subtype": "NVMe", 00:06:21.678 "listen_addresses": [ 00:06:21.678 { 00:06:21.678 "transport": "TCP", 00:06:21.678 "trtype": "TCP", 00:06:21.678 "adrfam": "IPv4", 00:06:21.678 "traddr": "10.0.0.2", 00:06:21.678 "trsvcid": "4420" 00:06:21.678 } 00:06:21.678 ], 00:06:21.678 "allow_any_host": true, 00:06:21.678 "hosts": [], 00:06:21.678 "serial_number": "SPDK00000000000004", 00:06:21.678 "model_number": "SPDK bdev Controller", 00:06:21.678 "max_namespaces": 32, 00:06:21.678 "min_cntlid": 1, 00:06:21.678 "max_cntlid": 65519, 00:06:21.678 "namespaces": [ 00:06:21.678 { 00:06:21.678 "nsid": 1, 00:06:21.678 "bdev_name": "Null4", 00:06:21.678 "name": "Null4", 00:06:21.678 "nguid": "37281AD224CC47AB972447CE34CFC424", 00:06:21.678 "uuid": "37281ad2-24cc-47ab-9724-47ce34cfc424" 00:06:21.678 } 00:06:21.678 ] 00:06:21.678 } 00:06:21.678 ] 00:06:21.678 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.678 21:20:47 -- target/discovery.sh@42 -- # seq 1 4 00:06:21.678 21:20:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:21.678 21:20:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:21.678 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.678 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.678 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.678 21:20:47 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:21.678 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.678 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.678 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.678 21:20:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:21.678 21:20:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:21.678 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.678 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.678 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.678 21:20:47 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:21.678 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.678 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.936 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.936 21:20:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:21.936 21:20:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:21.936 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.936 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.936 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.936 21:20:47 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:21.936 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.936 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.936 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.936 21:20:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:21.936 21:20:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:21.937 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.937 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.937 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
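The teardown now in progress mirrors the setup: each subsystem is deleted, then the null bdev behind it, and finally the referral. Condensed, this is the loop the log is replaying:

for i in $(seq 1 4); do
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
  rpc_cmd bdev_null_delete Null$i
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430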
00:06:21.937 21:20:47 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:21.937 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.937 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.937 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.937 21:20:47 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:21.937 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.937 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.937 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.937 21:20:47 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:21.937 21:20:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.937 21:20:47 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:21.937 21:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.937 21:20:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.937 21:20:47 -- target/discovery.sh@49 -- # check_bdevs= 00:06:21.937 21:20:47 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:21.937 21:20:47 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:21.937 21:20:47 -- target/discovery.sh@57 -- # nvmftestfini 00:06:21.937 21:20:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:21.937 21:20:47 -- nvmf/common.sh@117 -- # sync 00:06:21.937 21:20:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:21.937 21:20:47 -- nvmf/common.sh@120 -- # set +e 00:06:21.937 21:20:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:21.937 21:20:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:21.937 rmmod nvme_tcp 00:06:21.937 rmmod nvme_fabrics 00:06:21.937 rmmod nvme_keyring 00:06:21.937 21:20:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:21.937 21:20:47 -- nvmf/common.sh@124 -- # set -e 00:06:21.937 21:20:47 -- nvmf/common.sh@125 -- # return 0 00:06:21.937 21:20:47 -- nvmf/common.sh@478 -- # '[' -n 2508858 ']' 00:06:21.937 21:20:47 -- nvmf/common.sh@479 -- # killprocess 2508858 00:06:21.937 21:20:47 -- common/autotest_common.sh@936 -- # '[' -z 2508858 ']' 00:06:21.937 21:20:47 -- common/autotest_common.sh@940 -- # kill -0 2508858 00:06:21.937 21:20:47 -- common/autotest_common.sh@941 -- # uname 00:06:21.937 21:20:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.937 21:20:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2508858 00:06:21.937 21:20:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.937 21:20:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.937 21:20:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2508858' 00:06:21.937 killing process with pid 2508858 00:06:21.937 21:20:47 -- common/autotest_common.sh@955 -- # kill 2508858 00:06:21.937 [2024-04-24 21:20:47.528258] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:06:21.937 21:20:47 -- common/autotest_common.sh@960 -- # wait 2508858 00:06:22.196 21:20:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:22.196 21:20:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:22.196 21:20:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:22.196 21:20:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:22.196 21:20:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:22.196 21:20:47 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.196 21:20:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:22.196 21:20:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.734 21:20:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:24.734 00:06:24.734 real 0m6.073s 00:06:24.734 user 0m6.967s 00:06:24.734 sys 0m1.895s 00:06:24.734 21:20:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.734 21:20:49 -- common/autotest_common.sh@10 -- # set +x 00:06:24.735 ************************************ 00:06:24.735 END TEST nvmf_discovery 00:06:24.735 ************************************ 00:06:24.735 21:20:49 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:24.735 21:20:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:24.735 21:20:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.735 21:20:49 -- common/autotest_common.sh@10 -- # set +x 00:06:24.735 ************************************ 00:06:24.735 START TEST nvmf_referrals 00:06:24.735 ************************************ 00:06:24.735 21:20:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:24.735 * Looking for test storage... 00:06:24.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.735 21:20:50 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.735 21:20:50 -- nvmf/common.sh@7 -- # uname -s 00:06:24.735 21:20:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.735 21:20:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.735 21:20:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.735 21:20:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.735 21:20:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.735 21:20:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.735 21:20:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.735 21:20:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.735 21:20:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.735 21:20:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.735 21:20:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.735 21:20:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.735 21:20:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.735 21:20:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.735 21:20:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.735 21:20:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.735 21:20:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.735 21:20:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.735 21:20:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.735 21:20:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.735 21:20:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.735 21:20:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.735 21:20:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.735 21:20:50 -- paths/export.sh@5 -- # export PATH 00:06:24.735 21:20:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.735 21:20:50 -- nvmf/common.sh@47 -- # : 0 00:06:24.735 21:20:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:24.735 21:20:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:24.735 21:20:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.735 21:20:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.735 21:20:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.735 21:20:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:24.735 21:20:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:24.735 21:20:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:24.735 21:20:50 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:24.735 21:20:50 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:24.735 21:20:50 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:24.735 21:20:50 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:24.735 21:20:50 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:24.735 21:20:50 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:24.735 21:20:50 -- target/referrals.sh@37 -- # nvmftestinit 00:06:24.735 21:20:50 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:06:24.735 21:20:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.735 21:20:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:24.735 21:20:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:24.735 21:20:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:24.735 21:20:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.735 21:20:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:24.735 21:20:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.735 21:20:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:24.735 21:20:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:24.735 21:20:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:24.735 21:20:50 -- common/autotest_common.sh@10 -- # set +x 00:06:26.639 21:20:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:26.640 21:20:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:26.640 21:20:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:26.640 21:20:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:26.640 21:20:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:26.640 21:20:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:26.640 21:20:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:26.640 21:20:52 -- nvmf/common.sh@295 -- # net_devs=() 00:06:26.640 21:20:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:26.640 21:20:52 -- nvmf/common.sh@296 -- # e810=() 00:06:26.640 21:20:52 -- nvmf/common.sh@296 -- # local -ga e810 00:06:26.640 21:20:52 -- nvmf/common.sh@297 -- # x722=() 00:06:26.640 21:20:52 -- nvmf/common.sh@297 -- # local -ga x722 00:06:26.640 21:20:52 -- nvmf/common.sh@298 -- # mlx=() 00:06:26.640 21:20:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:26.640 21:20:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.640 21:20:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.640 21:20:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.640 21:20:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.640 21:20:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.640 21:20:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.640 21:20:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.640 21:20:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.640 21:20:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.640 21:20:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.640 21:20:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.640 21:20:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:26.640 21:20:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:26.640 21:20:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:26.640 21:20:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.640 21:20:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:26.640 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:26.640 21:20:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.640 21:20:52 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.640 21:20:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:26.640 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:26.640 21:20:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:26.640 21:20:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.640 21:20:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.640 21:20:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:26.640 21:20:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.640 21:20:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:26.640 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:26.640 21:20:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.640 21:20:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.640 21:20:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.640 21:20:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:26.640 21:20:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.640 21:20:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:26.640 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:26.640 21:20:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.640 21:20:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:26.640 21:20:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:26.640 21:20:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:26.640 21:20:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.640 21:20:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.640 21:20:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.640 21:20:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:26.640 21:20:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.640 21:20:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.640 21:20:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:26.640 21:20:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.640 21:20:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.640 21:20:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:26.640 21:20:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:26.640 21:20:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.640 21:20:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
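For the referrals run the harness rebuilds the same two-port TCP topology: the first e810 port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands around this point in the log, with nothing assumed beyond what it prints:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in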
00:06:26.640 21:20:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.640 21:20:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.640 21:20:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:26.640 21:20:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.640 21:20:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.640 21:20:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.640 21:20:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:26.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:26.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:06:26.640 00:06:26.640 --- 10.0.0.2 ping statistics --- 00:06:26.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.640 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:06:26.640 21:20:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:26.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:06:26.640 00:06:26.640 --- 10.0.0.1 ping statistics --- 00:06:26.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.640 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:06:26.640 21:20:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.640 21:20:52 -- nvmf/common.sh@411 -- # return 0 00:06:26.640 21:20:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:26.640 21:20:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.640 21:20:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:26.640 21:20:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.640 21:20:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:26.640 21:20:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:26.640 21:20:52 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:26.640 21:20:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:26.640 21:20:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:26.640 21:20:52 -- common/autotest_common.sh@10 -- # set +x 00:06:26.640 21:20:52 -- nvmf/common.sh@470 -- # nvmfpid=2511077 00:06:26.640 21:20:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:26.640 21:20:52 -- nvmf/common.sh@471 -- # waitforlisten 2511077 00:06:26.640 21:20:52 -- common/autotest_common.sh@817 -- # '[' -z 2511077 ']' 00:06:26.640 21:20:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.640 21:20:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.640 21:20:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.640 21:20:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.640 21:20:52 -- common/autotest_common.sh@10 -- # set +x 00:06:26.640 [2024-04-24 21:20:52.285709] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:06:26.640 [2024-04-24 21:20:52.285798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.900 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.900 [2024-04-24 21:20:52.356443] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.900 [2024-04-24 21:20:52.476074] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.900 [2024-04-24 21:20:52.476153] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.900 [2024-04-24 21:20:52.476170] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.900 [2024-04-24 21:20:52.476182] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.900 [2024-04-24 21:20:52.476194] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:26.900 [2024-04-24 21:20:52.476285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.900 [2024-04-24 21:20:52.476348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.900 [2024-04-24 21:20:52.476411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.900 [2024-04-24 21:20:52.476414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.834 21:20:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:27.834 21:20:53 -- common/autotest_common.sh@850 -- # return 0 00:06:27.834 21:20:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:27.834 21:20:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:27.834 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.834 21:20:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.834 21:20:53 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:27.834 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.834 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.834 [2024-04-24 21:20:53.253759] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.834 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.834 21:20:53 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:27.834 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.834 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.834 [2024-04-24 21:20:53.266003] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:27.834 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.834 21:20:53 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:27.834 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.834 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.834 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.834 21:20:53 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:27.834 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.834 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.834 21:20:53 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:06:27.834 21:20:53 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:27.834 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.834 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.834 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.834 21:20:53 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:27.834 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.834 21:20:53 -- target/referrals.sh@48 -- # jq length 00:06:27.834 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.834 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.834 21:20:53 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:27.834 21:20:53 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:27.834 21:20:53 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:27.834 21:20:53 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:27.834 21:20:53 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:27.834 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.834 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.834 21:20:53 -- target/referrals.sh@21 -- # sort 00:06:27.834 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.835 21:20:53 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:27.835 21:20:53 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:27.835 21:20:53 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:27.835 21:20:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:27.835 21:20:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:27.835 21:20:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:27.835 21:20:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:27.835 21:20:53 -- target/referrals.sh@26 -- # sort 00:06:27.835 21:20:53 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:27.835 21:20:53 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:27.835 21:20:53 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:27.835 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.835 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.835 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.835 21:20:53 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:27.835 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.835 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.835 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.835 21:20:53 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:27.835 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.835 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.094 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.094 21:20:53 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:06:28.094 21:20:53 -- target/referrals.sh@56 -- # jq length 00:06:28.094 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.094 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.094 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.094 21:20:53 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:28.094 21:20:53 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:28.094 21:20:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:28.094 21:20:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:28.094 21:20:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:28.094 21:20:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:28.094 21:20:53 -- target/referrals.sh@26 -- # sort 00:06:28.094 21:20:53 -- target/referrals.sh@26 -- # echo 00:06:28.094 21:20:53 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:28.094 21:20:53 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:28.094 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.094 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.094 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.094 21:20:53 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:28.094 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.094 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.094 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.094 21:20:53 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:28.094 21:20:53 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:28.094 21:20:53 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:28.094 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.094 21:20:53 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:28.094 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.094 21:20:53 -- target/referrals.sh@21 -- # sort 00:06:28.094 21:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.094 21:20:53 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:28.094 21:20:53 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:28.094 21:20:53 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:28.094 21:20:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:28.094 21:20:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:28.094 21:20:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:28.094 21:20:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:28.094 21:20:53 -- target/referrals.sh@26 -- # sort 00:06:28.352 21:20:53 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:28.352 21:20:53 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:28.352 21:20:53 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:06:28.352 21:20:53 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:28.352 21:20:53 -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:28.352 21:20:53 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:28.352 21:20:53 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:28.609 21:20:54 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:28.609 21:20:54 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:28.609 21:20:54 -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:28.609 21:20:54 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:28.609 21:20:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:28.609 21:20:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:28.609 21:20:54 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:28.609 21:20:54 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:28.609 21:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.609 21:20:54 -- common/autotest_common.sh@10 -- # set +x 00:06:28.609 21:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.609 21:20:54 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:28.609 21:20:54 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:28.609 21:20:54 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:28.867 21:20:54 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:28.867 21:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.867 21:20:54 -- common/autotest_common.sh@10 -- # set +x 00:06:28.867 21:20:54 -- target/referrals.sh@21 -- # sort 00:06:28.867 21:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.867 21:20:54 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:28.867 21:20:54 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:28.867 21:20:54 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:28.867 21:20:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:28.867 21:20:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:28.867 21:20:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:28.867 21:20:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:28.867 21:20:54 -- target/referrals.sh@26 -- # sort 00:06:28.867 21:20:54 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:28.867 21:20:54 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:28.867 21:20:54 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:28.867 21:20:54 -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:28.867 21:20:54 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:06:28.867 21:20:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:28.867 21:20:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:29.125 21:20:54 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:29.125 21:20:54 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:29.125 21:20:54 -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:29.125 21:20:54 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:29.125 21:20:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:29.125 21:20:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:29.125 21:20:54 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:29.125 21:20:54 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:29.125 21:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.125 21:20:54 -- common/autotest_common.sh@10 -- # set +x 00:06:29.125 21:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.125 21:20:54 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:29.125 21:20:54 -- target/referrals.sh@82 -- # jq length 00:06:29.125 21:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.125 21:20:54 -- common/autotest_common.sh@10 -- # set +x 00:06:29.125 21:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.125 21:20:54 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:29.125 21:20:54 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:29.125 21:20:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:29.125 21:20:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:29.125 21:20:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:29.125 21:20:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:29.125 21:20:54 -- target/referrals.sh@26 -- # sort 00:06:29.384 21:20:54 -- target/referrals.sh@26 -- # echo 00:06:29.384 21:20:54 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:29.384 21:20:54 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:29.384 21:20:54 -- target/referrals.sh@86 -- # nvmftestfini 00:06:29.384 21:20:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:29.384 21:20:54 -- nvmf/common.sh@117 -- # sync 00:06:29.384 21:20:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:29.384 21:20:54 -- nvmf/common.sh@120 -- # set +e 00:06:29.384 21:20:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:29.384 21:20:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:29.384 rmmod nvme_tcp 00:06:29.384 rmmod nvme_fabrics 00:06:29.384 rmmod nvme_keyring 00:06:29.384 21:20:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:29.384 21:20:54 -- nvmf/common.sh@124 -- # set -e 
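The referral exercise traced above reduces to a small, self-contained RPC flow: create the TCP transport, expose a discovery listener, add three referrals, confirm them via both the RPC view and `nvme discover`, then remove them and confirm the list is empty. A minimal standalone sketch of that flow, assuming SPDK's scripts/rpc.py as the concrete form of the `rpc_cmd` wrapper seen in the trace (the $rpc path and variable name here are illustrative, not from the log; all subcommands and flags are the ones logged):

#!/usr/bin/env bash
set -e    # treat the bracket checks below as assertions
# Sketch of the discovery-referral flow exercised by referrals.sh (assumed rpc.py path).
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
[ "$($rpc nvmf_discovery_get_referrals | jq length)" -eq 3 ]    # all three referrals visible

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done
[ "$($rpc nvmf_discovery_get_referrals | jq length)" -eq 0 ]    # and gone again

The later `-n discovery` and `-n nqn.2016-06.io.spdk:cnode1` additions in the trace follow the same pattern, with the referral's subsystem NQN supplied explicitly.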
00:06:29.384 21:20:54 -- nvmf/common.sh@125 -- # return 0 00:06:29.384 21:20:54 -- nvmf/common.sh@478 -- # '[' -n 2511077 ']' 00:06:29.384 21:20:54 -- nvmf/common.sh@479 -- # killprocess 2511077 00:06:29.384 21:20:54 -- common/autotest_common.sh@936 -- # '[' -z 2511077 ']' 00:06:29.384 21:20:54 -- common/autotest_common.sh@940 -- # kill -0 2511077 00:06:29.384 21:20:54 -- common/autotest_common.sh@941 -- # uname 00:06:29.384 21:20:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.384 21:20:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2511077 00:06:29.384 21:20:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.384 21:20:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.384 21:20:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2511077' 00:06:29.384 killing process with pid 2511077 00:06:29.384 21:20:54 -- common/autotest_common.sh@955 -- # kill 2511077 00:06:29.384 21:20:54 -- common/autotest_common.sh@960 -- # wait 2511077 00:06:29.643 21:20:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:29.643 21:20:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:29.643 21:20:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:29.643 21:20:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:29.643 21:20:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:29.643 21:20:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.643 21:20:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:29.644 21:20:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.193 21:20:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:32.193 00:06:32.193 real 0m7.295s 00:06:32.193 user 0m12.412s 00:06:32.193 sys 0m2.175s 00:06:32.193 21:20:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.193 21:20:57 -- common/autotest_common.sh@10 -- # set +x 00:06:32.193 ************************************ 00:06:32.193 END TEST nvmf_referrals 00:06:32.193 ************************************ 00:06:32.193 21:20:57 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:32.193 21:20:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:32.193 21:20:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.193 21:20:57 -- common/autotest_common.sh@10 -- # set +x 00:06:32.193 ************************************ 00:06:32.193 START TEST nvmf_connect_disconnect 00:06:32.194 ************************************ 00:06:32.194 21:20:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:32.194 * Looking for test storage... 
00:06:32.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.194 21:20:57 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.194 21:20:57 -- nvmf/common.sh@7 -- # uname -s 00:06:32.194 21:20:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.194 21:20:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.194 21:20:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.194 21:20:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.194 21:20:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.194 21:20:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.194 21:20:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.194 21:20:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.194 21:20:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.194 21:20:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.194 21:20:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.194 21:20:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.194 21:20:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.194 21:20:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.194 21:20:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.194 21:20:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.194 21:20:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.194 21:20:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.194 21:20:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.194 21:20:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.194 21:20:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.194 21:20:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.194 21:20:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.194 21:20:57 -- paths/export.sh@5 -- # export PATH 00:06:32.194 21:20:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.194 21:20:57 -- nvmf/common.sh@47 -- # : 0 00:06:32.194 21:20:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:32.194 21:20:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:32.194 21:20:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.194 21:20:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.194 21:20:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.194 21:20:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:32.194 21:20:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:32.194 21:20:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:32.194 21:20:57 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:32.194 21:20:57 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:32.194 21:20:57 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:32.194 21:20:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:32.194 21:20:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.194 21:20:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:32.194 21:20:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:32.194 21:20:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:32.194 21:20:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.194 21:20:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:32.194 21:20:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.194 21:20:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:32.194 21:20:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:32.194 21:20:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:32.194 21:20:57 -- common/autotest_common.sh@10 -- # set +x 00:06:34.096 21:20:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:34.096 21:20:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:34.096 21:20:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:34.096 21:20:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:34.096 21:20:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:34.096 21:20:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:34.096 21:20:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:34.096 21:20:59 -- nvmf/common.sh@295 -- # net_devs=() 00:06:34.096 21:20:59 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:06:34.096 21:20:59 -- nvmf/common.sh@296 -- # e810=() 00:06:34.096 21:20:59 -- nvmf/common.sh@296 -- # local -ga e810 00:06:34.096 21:20:59 -- nvmf/common.sh@297 -- # x722=() 00:06:34.096 21:20:59 -- nvmf/common.sh@297 -- # local -ga x722 00:06:34.096 21:20:59 -- nvmf/common.sh@298 -- # mlx=() 00:06:34.096 21:20:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:34.096 21:20:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:34.096 21:20:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:34.096 21:20:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:34.096 21:20:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:34.096 21:20:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:34.096 21:20:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:34.096 21:20:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:34.096 21:20:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:34.096 21:20:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:34.096 21:20:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:34.096 21:20:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:34.096 21:20:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:34.097 21:20:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:34.097 21:20:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:34.097 21:20:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:34.097 21:20:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:34.097 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:34.097 21:20:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:34.097 21:20:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:34.097 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:34.097 21:20:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:34.097 21:20:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:34.097 21:20:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.097 21:20:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:34.097 21:20:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.097 21:20:59 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:06:34.097 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:34.097 21:20:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.097 21:20:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:34.097 21:20:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.097 21:20:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:34.097 21:20:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.097 21:20:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:34.097 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:34.097 21:20:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.097 21:20:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:34.097 21:20:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:34.097 21:20:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:34.097 21:20:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:34.097 21:20:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:34.097 21:20:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:34.097 21:20:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:34.097 21:20:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:34.097 21:20:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:34.097 21:20:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:34.097 21:20:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:34.097 21:20:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:34.097 21:20:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:34.097 21:20:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:34.097 21:20:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.097 21:20:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.097 21:20:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.097 21:20:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.097 21:20:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:34.097 21:20:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.097 21:20:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.097 21:20:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.097 21:20:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:34.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:06:34.097 00:06:34.097 --- 10.0.0.2 ping statistics --- 00:06:34.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.097 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:06:34.097 21:20:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:34.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:06:34.097 00:06:34.097 --- 10.0.0.1 ping statistics --- 00:06:34.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.097 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:06:34.097 21:20:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.097 21:20:59 -- nvmf/common.sh@411 -- # return 0 00:06:34.097 21:20:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:34.097 21:20:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.097 21:20:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:34.097 21:20:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.097 21:20:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:34.097 21:20:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:34.097 21:20:59 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:34.097 21:20:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:34.097 21:20:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:34.097 21:20:59 -- common/autotest_common.sh@10 -- # set +x 00:06:34.097 21:20:59 -- nvmf/common.sh@470 -- # nvmfpid=2513411 00:06:34.097 21:20:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:34.097 21:20:59 -- nvmf/common.sh@471 -- # waitforlisten 2513411 00:06:34.097 21:20:59 -- common/autotest_common.sh@817 -- # '[' -z 2513411 ']' 00:06:34.097 21:20:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.097 21:20:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:34.097 21:20:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.097 21:20:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:34.097 21:20:59 -- common/autotest_common.sh@10 -- # set +x 00:06:34.097 [2024-04-24 21:20:59.725239] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:06:34.097 [2024-04-24 21:20:59.725315] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.356 [2024-04-24 21:20:59.793026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.356 [2024-04-24 21:20:59.903264] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.356 [2024-04-24 21:20:59.903327] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.356 [2024-04-24 21:20:59.903355] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.356 [2024-04-24 21:20:59.903366] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.356 [2024-04-24 21:20:59.903376] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
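The nvmf_tcp_init sequence above builds the physical-NIC test topology: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, the peer port (cvl_0_1) stays in the root namespace as the initiator, and a one-packet ping in each direction proves reachability before the target app starts inside the namespace. Condensed from the trace and regrouped by role (device names and addresses are the ones logged; root privileges assumed):

# Target side: isolate one port in its own namespace with the target IP.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Initiator side: the peer port keeps the initiator IP in the root namespace,
# with the NVMe/TCP port opened in the firewall.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity: one ping each way before launching nvmf_tgt via 'ip netns exec'.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1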
00:06:34.356 [2024-04-24 21:20:59.903452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.356 [2024-04-24 21:20:59.903509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.356 [2024-04-24 21:20:59.903576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.356 [2024-04-24 21:20:59.903579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.622 21:21:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:34.623 21:21:00 -- common/autotest_common.sh@850 -- # return 0 00:06:34.623 21:21:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:34.623 21:21:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:34.623 21:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:34.623 21:21:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.623 21:21:00 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:34.623 21:21:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.623 21:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:34.623 [2024-04-24 21:21:00.062347] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.623 21:21:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.623 21:21:00 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:34.623 21:21:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.623 21:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:34.623 21:21:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.623 21:21:00 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:34.623 21:21:00 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:34.623 21:21:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.623 21:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:34.623 21:21:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.623 21:21:00 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:34.623 21:21:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.623 21:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:34.623 21:21:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.623 21:21:00 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.623 21:21:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.623 21:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:34.623 [2024-04-24 21:21:00.120500] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.623 21:21:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.623 21:21:00 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:34.623 21:21:00 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:34.623 21:21:00 -- target/connect_disconnect.sh@34 -- # set +x 00:06:37.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:40.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:43.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:45.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:48.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:48.899 21:21:13 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:48.899 21:21:13 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:48.899 21:21:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:48.899 21:21:13 -- nvmf/common.sh@117 -- # sync 00:06:48.899 21:21:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:48.899 21:21:13 -- nvmf/common.sh@120 -- # set +e 00:06:48.899 21:21:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:48.899 21:21:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:48.899 rmmod nvme_tcp 00:06:48.899 rmmod nvme_fabrics 00:06:48.899 rmmod nvme_keyring 00:06:48.899 21:21:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:48.899 21:21:13 -- nvmf/common.sh@124 -- # set -e 00:06:48.899 21:21:13 -- nvmf/common.sh@125 -- # return 0 00:06:48.899 21:21:13 -- nvmf/common.sh@478 -- # '[' -n 2513411 ']' 00:06:48.899 21:21:13 -- nvmf/common.sh@479 -- # killprocess 2513411 00:06:48.899 21:21:13 -- common/autotest_common.sh@936 -- # '[' -z 2513411 ']' 00:06:48.899 21:21:13 -- common/autotest_common.sh@940 -- # kill -0 2513411 00:06:48.899 21:21:13 -- common/autotest_common.sh@941 -- # uname 00:06:48.899 21:21:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:48.899 21:21:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2513411 00:06:48.899 21:21:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:48.899 21:21:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:48.899 21:21:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2513411' 00:06:48.899 killing process with pid 2513411 00:06:48.899 21:21:13 -- common/autotest_common.sh@955 -- # kill 2513411 00:06:48.899 21:21:13 -- common/autotest_common.sh@960 -- # wait 2513411 00:06:48.899 21:21:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:48.899 21:21:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:48.899 21:21:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:48.899 21:21:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:48.899 21:21:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:48.899 21:21:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.899 21:21:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.899 21:21:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.804 21:21:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:50.804 00:06:50.804 real 0m18.879s 00:06:50.804 user 0m56.435s 00:06:50.804 sys 0m3.530s 00:06:50.804 21:21:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:50.804 21:21:16 -- common/autotest_common.sh@10 -- # set +x 00:06:50.804 ************************************ 00:06:50.804 END TEST nvmf_connect_disconnect 00:06:50.804 ************************************ 00:06:50.804 21:21:16 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:50.804 21:21:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:50.804 21:21:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.804 21:21:16 -- common/autotest_common.sh@10 -- # set +x 00:06:50.804 ************************************ 00:06:50.804 START TEST nvmf_multitarget 00:06:50.804 ************************************ 00:06:50.804 21:21:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:06:50.804 * Looking for test storage... 00:06:51.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.063 21:21:16 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.063 21:21:16 -- nvmf/common.sh@7 -- # uname -s 00:06:51.063 21:21:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.063 21:21:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.063 21:21:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.063 21:21:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.063 21:21:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.063 21:21:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.063 21:21:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.063 21:21:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.063 21:21:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.063 21:21:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.063 21:21:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:51.063 21:21:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:51.063 21:21:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.063 21:21:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.063 21:21:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.063 21:21:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.063 21:21:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.063 21:21:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.063 21:21:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.063 21:21:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.063 21:21:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.063 21:21:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.063 21:21:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.063 21:21:16 -- paths/export.sh@5 -- # export PATH 00:06:51.063 21:21:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.064 21:21:16 -- nvmf/common.sh@47 -- # : 0 00:06:51.064 21:21:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.064 21:21:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.064 21:21:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.064 21:21:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.064 21:21:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.064 21:21:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.064 21:21:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.064 21:21:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.064 21:21:16 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:51.064 21:21:16 -- target/multitarget.sh@15 -- # nvmftestinit 00:06:51.064 21:21:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:51.064 21:21:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.064 21:21:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:51.064 21:21:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:51.064 21:21:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:51.064 21:21:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.064 21:21:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.064 21:21:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.064 21:21:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:51.064 21:21:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:51.064 21:21:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:51.064 21:21:16 -- common/autotest_common.sh@10 -- # set +x 00:06:52.967 21:21:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:52.967 21:21:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:52.967 21:21:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:52.967 21:21:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:52.967 21:21:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:52.967 21:21:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:52.967 21:21:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:52.967 21:21:18 -- nvmf/common.sh@295 -- # net_devs=() 00:06:52.967 21:21:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:52.967 21:21:18 -- 
nvmf/common.sh@296 -- # e810=() 00:06:52.967 21:21:18 -- nvmf/common.sh@296 -- # local -ga e810 00:06:52.967 21:21:18 -- nvmf/common.sh@297 -- # x722=() 00:06:52.967 21:21:18 -- nvmf/common.sh@297 -- # local -ga x722 00:06:52.967 21:21:18 -- nvmf/common.sh@298 -- # mlx=() 00:06:52.967 21:21:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:52.967 21:21:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.967 21:21:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.967 21:21:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.967 21:21:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.967 21:21:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.967 21:21:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.967 21:21:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.967 21:21:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.967 21:21:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.967 21:21:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.967 21:21:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.967 21:21:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:52.967 21:21:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:52.967 21:21:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:52.967 21:21:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.967 21:21:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:52.967 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:52.967 21:21:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.967 21:21:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:52.967 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:52.967 21:21:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:52.967 21:21:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.967 21:21:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.967 21:21:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:52.967 21:21:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.967 21:21:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:06:52.967 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:52.967 21:21:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.967 21:21:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.967 21:21:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.967 21:21:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:52.967 21:21:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.967 21:21:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:52.967 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:52.967 21:21:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.967 21:21:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:52.967 21:21:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:52.967 21:21:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:52.967 21:21:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:52.967 21:21:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.967 21:21:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.967 21:21:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.967 21:21:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:52.967 21:21:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.967 21:21:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.967 21:21:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:52.967 21:21:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.967 21:21:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.967 21:21:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:52.967 21:21:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:52.967 21:21:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.967 21:21:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.967 21:21:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.967 21:21:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.967 21:21:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:52.967 21:21:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.967 21:21:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.967 21:21:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.967 21:21:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:06:53.227 00:06:53.227 --- 10.0.0.2 ping statistics --- 00:06:53.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.227 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:06:53.227 21:21:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:53.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:06:53.227 00:06:53.227 --- 10.0.0.1 ping statistics --- 00:06:53.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.227 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:06:53.227 21:21:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.227 21:21:18 -- nvmf/common.sh@411 -- # return 0 00:06:53.227 21:21:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:53.227 21:21:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.227 21:21:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:53.227 21:21:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:53.227 21:21:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.227 21:21:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:53.227 21:21:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:53.227 21:21:18 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:53.227 21:21:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:53.227 21:21:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:53.227 21:21:18 -- common/autotest_common.sh@10 -- # set +x 00:06:53.227 21:21:18 -- nvmf/common.sh@470 -- # nvmfpid=2517781 00:06:53.227 21:21:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:53.227 21:21:18 -- nvmf/common.sh@471 -- # waitforlisten 2517781 00:06:53.227 21:21:18 -- common/autotest_common.sh@817 -- # '[' -z 2517781 ']' 00:06:53.227 21:21:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.227 21:21:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:53.227 21:21:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.227 21:21:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:53.227 21:21:18 -- common/autotest_common.sh@10 -- # set +x 00:06:53.227 [2024-04-24 21:21:18.728170] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:06:53.227 [2024-04-24 21:21:18.728247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.227 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.227 [2024-04-24 21:21:18.798744] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.486 [2024-04-24 21:21:18.918819] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.486 [2024-04-24 21:21:18.918875] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.486 [2024-04-24 21:21:18.918891] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.486 [2024-04-24 21:21:18.918905] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.486 [2024-04-24 21:21:18.918916] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:53.486 [2024-04-24 21:21:18.919014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.486 [2024-04-24 21:21:18.919074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.486 [2024-04-24 21:21:18.919129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.486 [2024-04-24 21:21:18.919131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.051 21:21:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:54.051 21:21:19 -- common/autotest_common.sh@850 -- # return 0 00:06:54.051 21:21:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:54.051 21:21:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:54.051 21:21:19 -- common/autotest_common.sh@10 -- # set +x 00:06:54.051 21:21:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.051 21:21:19 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:54.051 21:21:19 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:54.051 21:21:19 -- target/multitarget.sh@21 -- # jq length 00:06:54.308 21:21:19 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:54.308 21:21:19 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:54.308 "nvmf_tgt_1" 00:06:54.308 21:21:19 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:54.567 "nvmf_tgt_2" 00:06:54.567 21:21:20 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:54.567 21:21:20 -- target/multitarget.sh@28 -- # jq length 00:06:54.567 21:21:20 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:06:54.567 21:21:20 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:54.824 true 00:06:54.824 21:21:20 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:54.824 true 00:06:54.824 21:21:20 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:54.824 21:21:20 -- target/multitarget.sh@35 -- # jq length 00:06:55.083 21:21:20 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:55.083 21:21:20 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:55.083 21:21:20 -- target/multitarget.sh@41 -- # nvmftestfini 00:06:55.083 21:21:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:55.083 21:21:20 -- nvmf/common.sh@117 -- # sync 00:06:55.083 21:21:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:55.083 21:21:20 -- nvmf/common.sh@120 -- # set +e 00:06:55.083 21:21:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:55.083 21:21:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:55.083 rmmod nvme_tcp 00:06:55.083 rmmod nvme_fabrics 00:06:55.083 rmmod nvme_keyring 00:06:55.083 21:21:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:55.083 21:21:20 -- nvmf/common.sh@124 -- # set -e 00:06:55.083 21:21:20 -- nvmf/common.sh@125 -- # return 0 
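The multitarget body above is a create/verify/delete cycle driven through multitarget_rpc.py: the default target makes the baseline count 1, two named targets push it to 3, and deleting both returns it to 1. A condensed sketch using only the calls visible in the trace (helper path exactly as logged; the bracket checks mirror the `jq length` comparisons in the test):

#!/usr/bin/env bash
set -e    # treat the bracket checks below as assertions
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py"

[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default plus the two new targets
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to the default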
00:06:55.083 21:21:20 -- nvmf/common.sh@478 -- # '[' -n 2517781 ']' 00:06:55.083 21:21:20 -- nvmf/common.sh@479 -- # killprocess 2517781 00:06:55.083 21:21:20 -- common/autotest_common.sh@936 -- # '[' -z 2517781 ']' 00:06:55.083 21:21:20 -- common/autotest_common.sh@940 -- # kill -0 2517781 00:06:55.083 21:21:20 -- common/autotest_common.sh@941 -- # uname 00:06:55.083 21:21:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:55.083 21:21:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2517781 00:06:55.083 21:21:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:55.083 21:21:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:55.083 21:21:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2517781' 00:06:55.083 killing process with pid 2517781 00:06:55.083 21:21:20 -- common/autotest_common.sh@955 -- # kill 2517781 00:06:55.083 21:21:20 -- common/autotest_common.sh@960 -- # wait 2517781 00:06:55.342 21:21:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:55.342 21:21:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:55.342 21:21:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:55.342 21:21:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:55.342 21:21:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:55.342 21:21:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.342 21:21:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.342 21:21:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.246 21:21:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:57.505 00:06:57.505 real 0m6.494s 00:06:57.505 user 0m9.416s 00:06:57.505 sys 0m1.958s 00:06:57.505 21:21:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:57.505 21:21:22 -- common/autotest_common.sh@10 -- # set +x 00:06:57.505 ************************************ 00:06:57.505 END TEST nvmf_multitarget 00:06:57.505 ************************************ 00:06:57.505 21:21:22 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:57.505 21:21:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:57.505 21:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.505 21:21:22 -- common/autotest_common.sh@10 -- # set +x 00:06:57.505 ************************************ 00:06:57.505 START TEST nvmf_rpc 00:06:57.505 ************************************ 00:06:57.505 21:21:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:57.505 * Looking for test storage... 
00:06:57.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.505 21:21:23 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.505 21:21:23 -- nvmf/common.sh@7 -- # uname -s 00:06:57.505 21:21:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.505 21:21:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.505 21:21:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.505 21:21:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.505 21:21:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.505 21:21:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.505 21:21:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.505 21:21:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.505 21:21:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.505 21:21:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.505 21:21:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:57.505 21:21:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:57.505 21:21:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.505 21:21:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.505 21:21:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.505 21:21:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.505 21:21:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.505 21:21:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.505 21:21:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.505 21:21:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.505 21:21:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.505 21:21:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.505 21:21:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.505 21:21:23 -- paths/export.sh@5 -- # export PATH 00:06:57.505 21:21:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.505 21:21:23 -- nvmf/common.sh@47 -- # : 0 00:06:57.505 21:21:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.505 21:21:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.505 21:21:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.505 21:21:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.505 21:21:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.505 21:21:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.505 21:21:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.505 21:21:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.505 21:21:23 -- target/rpc.sh@11 -- # loops=5 00:06:57.505 21:21:23 -- target/rpc.sh@23 -- # nvmftestinit 00:06:57.505 21:21:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:57.505 21:21:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.505 21:21:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:57.505 21:21:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:57.505 21:21:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:57.505 21:21:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.505 21:21:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.505 21:21:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.505 21:21:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:57.505 21:21:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:57.505 21:21:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:57.505 21:21:23 -- common/autotest_common.sh@10 -- # set +x 00:07:00.040 21:21:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:00.040 21:21:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.040 21:21:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.040 21:21:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.040 21:21:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.040 21:21:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.040 21:21:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.040 21:21:25 -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.040 21:21:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.040 21:21:25 -- nvmf/common.sh@296 -- # e810=() 00:07:00.040 21:21:25 -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.040 
21:21:25 -- nvmf/common.sh@297 -- # x722=() 00:07:00.040 21:21:25 -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.040 21:21:25 -- nvmf/common.sh@298 -- # mlx=() 00:07:00.040 21:21:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.040 21:21:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.040 21:21:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.040 21:21:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.040 21:21:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.040 21:21:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.040 21:21:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.040 21:21:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.040 21:21:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.040 21:21:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.040 21:21:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.040 21:21:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.040 21:21:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.040 21:21:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.040 21:21:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.040 21:21:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.040 21:21:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:00.040 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:00.040 21:21:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.040 21:21:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:00.040 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:00.040 21:21:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.040 21:21:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.040 21:21:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.040 21:21:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:00.040 21:21:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.040 21:21:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:00.040 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:00.040 21:21:25 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:00.040 21:21:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.040 21:21:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.040 21:21:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:00.040 21:21:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.040 21:21:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:00.040 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:00.040 21:21:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.040 21:21:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:00.040 21:21:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:00.040 21:21:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:00.040 21:21:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.040 21:21:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.040 21:21:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.040 21:21:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.040 21:21:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.040 21:21:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.040 21:21:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.040 21:21:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.040 21:21:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.040 21:21:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.040 21:21:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.040 21:21:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.040 21:21:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.040 21:21:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.040 21:21:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.040 21:21:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.040 21:21:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.040 21:21:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.040 21:21:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.040 21:21:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:07:00.040 00:07:00.040 --- 10.0.0.2 ping statistics --- 00:07:00.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.040 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:07:00.040 21:21:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:00.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:07:00.040 00:07:00.040 --- 10.0.0.1 ping statistics --- 00:07:00.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.040 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:07:00.040 21:21:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.040 21:21:25 -- nvmf/common.sh@411 -- # return 0 00:07:00.040 21:21:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:00.040 21:21:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.040 21:21:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:00.040 21:21:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.040 21:21:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:00.040 21:21:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:00.040 21:21:25 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:00.040 21:21:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:00.040 21:21:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:00.040 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.040 21:21:25 -- nvmf/common.sh@470 -- # nvmfpid=2520018 00:07:00.041 21:21:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:00.041 21:21:25 -- nvmf/common.sh@471 -- # waitforlisten 2520018 00:07:00.041 21:21:25 -- common/autotest_common.sh@817 -- # '[' -z 2520018 ']' 00:07:00.041 21:21:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.041 21:21:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:00.041 21:21:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.041 21:21:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:00.041 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.041 [2024-04-24 21:21:25.353884] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:00.041 [2024-04-24 21:21:25.353973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.041 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.041 [2024-04-24 21:21:25.433262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.041 [2024-04-24 21:21:25.557382] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.041 [2024-04-24 21:21:25.557445] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.041 [2024-04-24 21:21:25.557461] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.041 [2024-04-24 21:21:25.557475] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.041 [2024-04-24 21:21:25.557487] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
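The interface plumbing that produced those pings is worth isolating: one ice port is moved into a private namespace to act as the target, its peer stays in the root namespace as the initiator, and TCP/4420 is opened between them. Condensed from the trace above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator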
00:07:00.041 [2024-04-24 21:21:25.557547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.041 [2024-04-24 21:21:25.557599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.041 [2024-04-24 21:21:25.557658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.041 [2024-04-24 21:21:25.557663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.041 21:21:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:00.041 21:21:25 -- common/autotest_common.sh@850 -- # return 0 00:07:00.041 21:21:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:00.041 21:21:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:00.041 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.300 21:21:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.300 21:21:25 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:00.300 21:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.300 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.300 21:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.300 21:21:25 -- target/rpc.sh@26 -- # stats='{ 00:07:00.300 "tick_rate": 2700000000, 00:07:00.300 "poll_groups": [ 00:07:00.300 { 00:07:00.300 "name": "nvmf_tgt_poll_group_0", 00:07:00.300 "admin_qpairs": 0, 00:07:00.300 "io_qpairs": 0, 00:07:00.300 "current_admin_qpairs": 0, 00:07:00.300 "current_io_qpairs": 0, 00:07:00.300 "pending_bdev_io": 0, 00:07:00.300 "completed_nvme_io": 0, 00:07:00.300 "transports": [] 00:07:00.300 }, 00:07:00.300 { 00:07:00.300 "name": "nvmf_tgt_poll_group_1", 00:07:00.300 "admin_qpairs": 0, 00:07:00.300 "io_qpairs": 0, 00:07:00.300 "current_admin_qpairs": 0, 00:07:00.300 "current_io_qpairs": 0, 00:07:00.300 "pending_bdev_io": 0, 00:07:00.300 "completed_nvme_io": 0, 00:07:00.300 "transports": [] 00:07:00.300 }, 00:07:00.300 { 00:07:00.300 "name": "nvmf_tgt_poll_group_2", 00:07:00.300 "admin_qpairs": 0, 00:07:00.300 "io_qpairs": 0, 00:07:00.300 "current_admin_qpairs": 0, 00:07:00.300 "current_io_qpairs": 0, 00:07:00.300 "pending_bdev_io": 0, 00:07:00.300 "completed_nvme_io": 0, 00:07:00.300 "transports": [] 00:07:00.300 }, 00:07:00.300 { 00:07:00.300 "name": "nvmf_tgt_poll_group_3", 00:07:00.300 "admin_qpairs": 0, 00:07:00.300 "io_qpairs": 0, 00:07:00.300 "current_admin_qpairs": 0, 00:07:00.300 "current_io_qpairs": 0, 00:07:00.300 "pending_bdev_io": 0, 00:07:00.300 "completed_nvme_io": 0, 00:07:00.300 "transports": [] 00:07:00.300 } 00:07:00.300 ] 00:07:00.300 }' 00:07:00.300 21:21:25 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:00.300 21:21:25 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:00.300 21:21:25 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:00.300 21:21:25 -- target/rpc.sh@15 -- # wc -l 00:07:00.300 21:21:25 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:00.300 21:21:25 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:00.300 21:21:25 -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:00.300 21:21:25 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:00.300 21:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.300 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.300 [2024-04-24 21:21:25.827966] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.300 21:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.300 21:21:25 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:00.300 21:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.300 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.300 21:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.300 21:21:25 -- target/rpc.sh@33 -- # stats='{ 00:07:00.300 "tick_rate": 2700000000, 00:07:00.300 "poll_groups": [ 00:07:00.300 { 00:07:00.300 "name": "nvmf_tgt_poll_group_0", 00:07:00.300 "admin_qpairs": 0, 00:07:00.300 "io_qpairs": 0, 00:07:00.300 "current_admin_qpairs": 0, 00:07:00.300 "current_io_qpairs": 0, 00:07:00.300 "pending_bdev_io": 0, 00:07:00.300 "completed_nvme_io": 0, 00:07:00.300 "transports": [ 00:07:00.300 { 00:07:00.300 "trtype": "TCP" 00:07:00.300 } 00:07:00.300 ] 00:07:00.300 }, 00:07:00.300 { 00:07:00.300 "name": "nvmf_tgt_poll_group_1", 00:07:00.300 "admin_qpairs": 0, 00:07:00.300 "io_qpairs": 0, 00:07:00.301 "current_admin_qpairs": 0, 00:07:00.301 "current_io_qpairs": 0, 00:07:00.301 "pending_bdev_io": 0, 00:07:00.301 "completed_nvme_io": 0, 00:07:00.301 "transports": [ 00:07:00.301 { 00:07:00.301 "trtype": "TCP" 00:07:00.301 } 00:07:00.301 ] 00:07:00.301 }, 00:07:00.301 { 00:07:00.301 "name": "nvmf_tgt_poll_group_2", 00:07:00.301 "admin_qpairs": 0, 00:07:00.301 "io_qpairs": 0, 00:07:00.301 "current_admin_qpairs": 0, 00:07:00.301 "current_io_qpairs": 0, 00:07:00.301 "pending_bdev_io": 0, 00:07:00.301 "completed_nvme_io": 0, 00:07:00.301 "transports": [ 00:07:00.301 { 00:07:00.301 "trtype": "TCP" 00:07:00.301 } 00:07:00.301 ] 00:07:00.301 }, 00:07:00.301 { 00:07:00.301 "name": "nvmf_tgt_poll_group_3", 00:07:00.301 "admin_qpairs": 0, 00:07:00.301 "io_qpairs": 0, 00:07:00.301 "current_admin_qpairs": 0, 00:07:00.301 "current_io_qpairs": 0, 00:07:00.301 "pending_bdev_io": 0, 00:07:00.301 "completed_nvme_io": 0, 00:07:00.301 "transports": [ 00:07:00.301 { 00:07:00.301 "trtype": "TCP" 00:07:00.301 } 00:07:00.301 ] 00:07:00.301 } 00:07:00.301 ] 00:07:00.301 }' 00:07:00.301 21:21:25 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:00.301 21:21:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:00.301 21:21:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:00.301 21:21:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:00.301 21:21:25 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:00.301 21:21:25 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:00.301 21:21:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:00.301 21:21:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:00.301 21:21:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:00.301 21:21:25 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:00.301 21:21:25 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:00.301 21:21:25 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:00.301 21:21:25 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:00.301 21:21:25 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:00.301 21:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.301 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.301 Malloc1 00:07:00.301 21:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.301 21:21:25 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:00.301 21:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.301 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.301 
21:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.301 21:21:25 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:00.301 21:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.301 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.301 21:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.301 21:21:25 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:00.301 21:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.301 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.301 21:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.301 21:21:25 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:00.301 21:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.301 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.301 [2024-04-24 21:21:25.975842] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.560 21:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.560 21:21:25 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:00.560 21:21:25 -- common/autotest_common.sh@638 -- # local es=0 00:07:00.560 21:21:25 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:00.560 21:21:25 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:00.560 21:21:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:00.560 21:21:25 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:00.560 21:21:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:00.560 21:21:25 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:00.560 21:21:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:00.560 21:21:25 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:00.560 21:21:25 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:00.560 21:21:25 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:00.560 [2024-04-24 21:21:25.998285] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:00.560 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:00.560 could not add new controller: failed to write to nvme-fabrics device 00:07:00.560 21:21:26 -- common/autotest_common.sh@641 -- # es=1 00:07:00.560 21:21:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:00.560 21:21:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:00.560 21:21:26 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:07:00.560 21:21:26 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.560 21:21:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.560 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:07:00.560 21:21:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.560 21:21:26 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:01.127 21:21:26 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:01.127 21:21:26 -- common/autotest_common.sh@1184 -- # local i=0 00:07:01.127 21:21:26 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:01.127 21:21:26 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:01.127 21:21:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:03.029 21:21:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:03.029 21:21:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:03.029 21:21:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:03.029 21:21:28 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:03.029 21:21:28 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:03.029 21:21:28 -- common/autotest_common.sh@1194 -- # return 0 00:07:03.029 21:21:28 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:03.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.297 21:21:28 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:03.297 21:21:28 -- common/autotest_common.sh@1205 -- # local i=0 00:07:03.297 21:21:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:03.297 21:21:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.297 21:21:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:03.297 21:21:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.297 21:21:28 -- common/autotest_common.sh@1217 -- # return 0 00:07:03.297 21:21:28 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.297 21:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.297 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:07:03.297 21:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.297 21:21:28 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:03.298 21:21:28 -- common/autotest_common.sh@638 -- # local es=0 00:07:03.298 21:21:28 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:03.298 21:21:28 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:03.298 21:21:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:03.298 21:21:28 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:03.298 21:21:28 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:03.298 21:21:28 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:03.298 21:21:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:03.298 21:21:28 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:03.298 21:21:28 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:03.298 21:21:28 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:03.298 [2024-04-24 21:21:28.799553] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:03.298 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:03.298 could not add new controller: failed to write to nvme-fabrics device 00:07:03.298 21:21:28 -- common/autotest_common.sh@641 -- # es=1 00:07:03.298 21:21:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:03.298 21:21:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:03.298 21:21:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:03.298 21:21:28 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:03.298 21:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.298 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:07:03.298 21:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.298 21:21:28 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:03.902 21:21:29 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:03.902 21:21:29 -- common/autotest_common.sh@1184 -- # local i=0 00:07:03.902 21:21:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:03.902 21:21:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:03.902 21:21:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:06.428 21:21:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:06.428 21:21:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:06.428 21:21:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:06.428 21:21:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:06.428 21:21:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:06.428 21:21:31 -- common/autotest_common.sh@1194 -- # return 0 00:07:06.428 21:21:31 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:06.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:06.428 21:21:31 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:06.428 21:21:31 -- common/autotest_common.sh@1205 -- # local i=0 00:07:06.428 21:21:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:06.428 21:21:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:06.428 21:21:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:06.428 21:21:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:06.428 21:21:31 -- common/autotest_common.sh@1217 -- # return 0 00:07:06.428 21:21:31 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:06.428 21:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.429 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:07:06.429 21:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.429 21:21:31 -- target/rpc.sh@81 -- # seq 1 5 00:07:06.429 21:21:31 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:06.429 21:21:31 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:06.429 21:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.429 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:07:06.429 21:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.429 21:21:31 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.429 21:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.429 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:07:06.429 [2024-04-24 21:21:31.623644] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.429 21:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.429 21:21:31 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:06.429 21:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.429 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:07:06.429 21:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.429 21:21:31 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:06.429 21:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.429 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:07:06.429 21:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.429 21:21:31 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:06.687 21:21:32 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:06.687 21:21:32 -- common/autotest_common.sh@1184 -- # local i=0 00:07:06.687 21:21:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:06.687 21:21:32 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:06.687 21:21:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:08.583 21:21:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:08.583 21:21:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:08.583 21:21:34 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:08.583 21:21:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:08.583 21:21:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:08.583 21:21:34 -- common/autotest_common.sh@1194 -- # return 0 00:07:08.583 21:21:34 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:08.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:08.841 21:21:34 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:08.841 21:21:34 -- common/autotest_common.sh@1205 -- # local i=0 00:07:08.847 21:21:34 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:08.847 21:21:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
00:07:08.847 21:21:34 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:08.847 21:21:34 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:08.847 21:21:34 -- common/autotest_common.sh@1217 -- # return 0 00:07:08.847 21:21:34 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.847 21:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.847 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:07:08.847 21:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.847 21:21:34 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:08.847 21:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.847 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:07:08.847 21:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.847 21:21:34 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:08.847 21:21:34 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:08.847 21:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.847 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:07:08.847 21:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.847 21:21:34 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.847 21:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.847 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:07:08.847 [2024-04-24 21:21:34.415683] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.847 21:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.847 21:21:34 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:08.847 21:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.847 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:07:08.847 21:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.847 21:21:34 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:08.847 21:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.847 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:07:08.847 21:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.847 21:21:34 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:09.781 21:21:35 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:09.781 21:21:35 -- common/autotest_common.sh@1184 -- # local i=0 00:07:09.781 21:21:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:09.781 21:21:35 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:09.781 21:21:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:11.679 21:21:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:11.679 21:21:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:11.679 21:21:37 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:11.679 21:21:37 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:11.679 21:21:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:11.679 21:21:37 -- 
common/autotest_common.sh@1194 -- # return 0 00:07:11.679 21:21:37 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.679 21:21:37 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.679 21:21:37 -- common/autotest_common.sh@1205 -- # local i=0 00:07:11.679 21:21:37 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:11.679 21:21:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.679 21:21:37 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:11.679 21:21:37 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.679 21:21:37 -- common/autotest_common.sh@1217 -- # return 0 00:07:11.680 21:21:37 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.680 21:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.680 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:07:11.680 21:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.680 21:21:37 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.680 21:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.680 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:07:11.680 21:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.680 21:21:37 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:11.680 21:21:37 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:11.680 21:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.680 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:07:11.680 21:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.680 21:21:37 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.680 21:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.680 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:07:11.680 [2024-04-24 21:21:37.229383] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.680 21:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.680 21:21:37 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:11.680 21:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.680 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:07:11.680 21:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.680 21:21:37 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:11.680 21:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.680 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:07:11.680 21:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.680 21:21:37 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.611 21:21:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:12.611 21:21:37 -- common/autotest_common.sh@1184 -- # local i=0 00:07:12.611 21:21:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:12.611 21:21:37 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:07:12.611 21:21:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:14.518 21:21:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:14.518 21:21:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:14.518 21:21:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:14.518 21:21:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:14.518 21:21:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:14.518 21:21:39 -- common/autotest_common.sh@1194 -- # return 0 00:07:14.518 21:21:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:14.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.518 21:21:40 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:14.518 21:21:40 -- common/autotest_common.sh@1205 -- # local i=0 00:07:14.518 21:21:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:14.518 21:21:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.518 21:21:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:14.518 21:21:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.518 21:21:40 -- common/autotest_common.sh@1217 -- # return 0 00:07:14.518 21:21:40 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.518 21:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.518 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:07:14.518 21:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.518 21:21:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.518 21:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.518 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:07:14.518 21:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.518 21:21:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:14.518 21:21:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:14.518 21:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.518 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:07:14.518 21:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.518 21:21:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.519 21:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.519 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:07:14.519 [2024-04-24 21:21:40.054662] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.519 21:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.519 21:21:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:14.519 21:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.519 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:07:14.519 21:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.519 21:21:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:14.519 21:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.519 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:07:14.519 21:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.519 
21:21:40 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:15.084 21:21:40 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:15.084 21:21:40 -- common/autotest_common.sh@1184 -- # local i=0 00:07:15.084 21:21:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:15.084 21:21:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:15.084 21:21:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:17.610 21:21:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:17.610 21:21:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:17.610 21:21:42 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:17.610 21:21:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:17.610 21:21:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:17.610 21:21:42 -- common/autotest_common.sh@1194 -- # return 0 00:07:17.610 21:21:42 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.610 21:21:42 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.610 21:21:42 -- common/autotest_common.sh@1205 -- # local i=0 00:07:17.610 21:21:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:17.610 21:21:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.610 21:21:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:17.610 21:21:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.610 21:21:42 -- common/autotest_common.sh@1217 -- # return 0 00:07:17.610 21:21:42 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.610 21:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.610 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:17.610 21:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.610 21:21:42 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.610 21:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.610 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:17.610 21:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.610 21:21:42 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:17.610 21:21:42 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:17.610 21:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.610 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:17.610 21:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.610 21:21:42 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.610 21:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.610 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:17.610 [2024-04-24 21:21:42.864545] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.610 21:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.610 21:21:42 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:17.610 
21:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.610 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:17.610 21:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.610 21:21:42 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:17.610 21:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.610 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:07:17.610 21:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.610 21:21:42 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.868 21:21:43 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:17.868 21:21:43 -- common/autotest_common.sh@1184 -- # local i=0 00:07:17.868 21:21:43 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:17.868 21:21:43 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:17.868 21:21:43 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:20.396 21:21:45 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:20.396 21:21:45 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:20.396 21:21:45 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:20.396 21:21:45 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:20.396 21:21:45 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:20.396 21:21:45 -- common/autotest_common.sh@1194 -- # return 0 00:07:20.396 21:21:45 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:20.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.396 21:21:45 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:20.396 21:21:45 -- common/autotest_common.sh@1205 -- # local i=0 00:07:20.396 21:21:45 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:20.396 21:21:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.396 21:21:45 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:20.396 21:21:45 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.396 21:21:45 -- common/autotest_common.sh@1217 -- # return 0 00:07:20.396 21:21:45 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@99 -- # seq 1 5 00:07:20.396 21:21:45 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:20.396 21:21:45 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 [2024-04-24 21:21:45.606719] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:20.396 21:21:45 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 [2024-04-24 21:21:45.654832] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:20.396 21:21:45 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.396 21:21:45 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.396 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.396 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.396 [2024-04-24 21:21:45.702985] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.396 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:20.397 21:21:45 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 [2024-04-24 21:21:45.751117] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 
21:21:45 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:20.397 21:21:45 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 [2024-04-24 21:21:45.799275] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.397 21:21:45 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:07:20.397 21:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:20.397 21:21:45 -- common/autotest_common.sh@10 -- # set +x
00:07:20.397 21:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:20.397 21:21:45 -- target/rpc.sh@110 -- # stats='{
00:07:20.397 "tick_rate": 2700000000,
00:07:20.397 "poll_groups": [
00:07:20.397 {
00:07:20.397 "name": "nvmf_tgt_poll_group_0",
00:07:20.397 "admin_qpairs": 2,
00:07:20.397 "io_qpairs": 84,
00:07:20.397 "current_admin_qpairs": 0,
00:07:20.397 "current_io_qpairs": 0,
00:07:20.397 "pending_bdev_io": 0,
00:07:20.397 "completed_nvme_io": 232,
00:07:20.397 "transports": [
00:07:20.397 {
00:07:20.397 "trtype": "TCP"
00:07:20.397 }
00:07:20.397 ]
00:07:20.397 },
00:07:20.397 {
00:07:20.397 "name": "nvmf_tgt_poll_group_1",
00:07:20.397 "admin_qpairs": 2,
00:07:20.397 "io_qpairs": 84,
00:07:20.397 "current_admin_qpairs": 0,
00:07:20.397 "current_io_qpairs": 0,
00:07:20.397 "pending_bdev_io": 0,
00:07:20.397 "completed_nvme_io": 134,
00:07:20.397 "transports": [
00:07:20.397 {
00:07:20.397 "trtype": "TCP"
00:07:20.397 }
00:07:20.397 ]
00:07:20.397 },
00:07:20.397 {
00:07:20.397 "name": "nvmf_tgt_poll_group_2",
00:07:20.397 "admin_qpairs": 1,
00:07:20.397 "io_qpairs": 84,
00:07:20.397 "current_admin_qpairs": 0,
00:07:20.397 "current_io_qpairs": 0,
00:07:20.397 "pending_bdev_io": 0,
00:07:20.397 "completed_nvme_io": 232,
00:07:20.397 "transports": [
00:07:20.397 {
00:07:20.397 "trtype": "TCP"
00:07:20.397 }
00:07:20.397 ]
00:07:20.397 },
00:07:20.397 {
00:07:20.397 "name": "nvmf_tgt_poll_group_3",
00:07:20.397 "admin_qpairs": 2,
00:07:20.397 "io_qpairs": 84,
00:07:20.397 "current_admin_qpairs": 0,
00:07:20.397 "current_io_qpairs": 0,
00:07:20.397 "pending_bdev_io": 0,
00:07:20.397 "completed_nvme_io": 88,
00:07:20.397 "transports": [
00:07:20.397 {
00:07:20.397 "trtype": "TCP"
00:07:20.397 }
00:07:20.397 ]
00:07:20.397 }
00:07:20.397 ]
00:07:20.397 }'
00:07:20.397 21:21:45 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:07:20.397 21:21:45 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:07:20.397 21:21:45 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:07:20.397 21:21:45 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:07:20.397 21:21:45 -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:07:20.397 21:21:45 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:07:20.397 21:21:45 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:07:20.397 21:21:45 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:07:20.397 21:21:45 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:07:20.397 21:21:45 -- target/rpc.sh@113 -- # (( 336 > 0 ))
00:07:20.397 21:21:45 -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:07:20.397 21:21:45 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:07:20.397 21:21:45 -- target/rpc.sh@123 -- # nvmftestfini
00:07:20.397 21:21:45 -- nvmf/common.sh@477 -- # nvmfcleanup
00:07:20.397 21:21:45 -- nvmf/common.sh@117 -- # sync
00:07:20.397 21:21:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:20.397 21:21:45 -- nvmf/common.sh@120 -- # set +e
00:07:20.397 21:21:45 -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:20.397 21:21:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:20.397 rmmod nvme_tcp
00:07:20.397 rmmod nvme_fabrics
00:07:20.397 rmmod nvme_keyring
00:07:20.397 21:21:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:20.397 21:21:46 -- nvmf/common.sh@124 -- # set -e
00:07:20.397 21:21:46 -- nvmf/common.sh@125 -- # return 0
00:07:20.397 21:21:46 -- nvmf/common.sh@478 -- # '[' -n 2520018 ']'
00:07:20.397 21:21:46 -- nvmf/common.sh@479 -- # killprocess 2520018
00:07:20.397 21:21:46 -- common/autotest_common.sh@936 -- # '[' -z 2520018 ']'
00:07:20.397 21:21:46 -- common/autotest_common.sh@940 -- # kill -0 2520018
00:07:20.397 21:21:46 -- common/autotest_common.sh@941 -- # uname
00:07:20.397 21:21:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:20.397 21:21:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2520018
00:07:20.397 21:21:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:20.397 21:21:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:20.397 21:21:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2520018'
00:07:20.397 killing process with pid 2520018
00:07:20.397 21:21:46 -- common/autotest_common.sh@955 -- # kill 2520018
00:07:20.397 21:21:46 -- common/autotest_common.sh@960 -- # wait 2520018
00:07:20.964 21:21:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:07:20.964 21:21:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:07:20.964 21:21:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:07:20.964 21:21:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:20.964 21:21:46 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:20.964 21:21:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:20.964 21:21:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:20.964 21:21:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:22.867 21:21:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:22.867
00:07:22.867 real 0m25.357s
00:07:22.867 user 1m22.250s
00:07:22.867 sys 0m4.085s
00:07:22.867 21:21:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:22.867 21:21:48 -- common/autotest_common.sh@10 -- # set +x
00:07:22.867 ************************************
00:07:22.867 END TEST nvmf_rpc
00:07:22.867 ************************************
00:07:22.867 21:21:48 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:07:22.867 21:21:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:07:22.867 21:21:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:22.867 21:21:48 -- common/autotest_common.sh@10 -- # set +x
00:07:22.867 ************************************
00:07:22.867 START TEST nvmf_invalid
00:07:22.867 ************************************
00:07:22.867 21:21:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:07:23.126 * Looking for test storage...
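For reference, the jsum helper exercised at target/rpc.sh@112 and @113 above reduces the nvmf_get_stats payload with jq and sums it with awk. A minimal standalone sketch, assuming rpc.py from the SPDK checkout is reachable and a target is listening on the default RPC socket (the real helper in test/nvmf/target/rpc.sh filters the pre-captured $stats variable rather than re-issuing the RPC):

	# jsum: sum one numeric field across all poll groups reported by nvmf_get_stats.
	rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

	jsum() {
		local filter=$1
		# one number per poll group from jq, accumulated by awk
		"$rpc" nvmf_get_stats | jq "$filter" | awk '{s+=$1}END{print s}'
	}

	# With the stats dumped above this yields 7 admin qpairs (2+2+1+2)
	# and 336 I/O qpairs (4 poll groups x 84), matching the (( 7 > 0 ))
	# and (( 336 > 0 )) assertions in the trace:
	(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
	(( $(jsum '.poll_groups[].io_qpairs') > 0 ))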
00:07:23.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.126 21:21:48 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.126 21:21:48 -- nvmf/common.sh@7 -- # uname -s 00:07:23.126 21:21:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.126 21:21:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.126 21:21:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.126 21:21:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.126 21:21:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.126 21:21:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.126 21:21:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.126 21:21:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.126 21:21:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.126 21:21:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.126 21:21:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.126 21:21:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.126 21:21:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.126 21:21:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.126 21:21:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.126 21:21:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.126 21:21:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.126 21:21:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.126 21:21:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.126 21:21:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.126 21:21:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.126 21:21:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.126 21:21:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.126 21:21:48 -- paths/export.sh@5 -- # export PATH 00:07:23.126 21:21:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.126 21:21:48 -- nvmf/common.sh@47 -- # : 0 00:07:23.126 21:21:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.126 21:21:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.126 21:21:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.126 21:21:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.126 21:21:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.126 21:21:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.126 21:21:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.126 21:21:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.126 21:21:48 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:23.126 21:21:48 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.126 21:21:48 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:23.126 21:21:48 -- target/invalid.sh@14 -- # target=foobar 00:07:23.126 21:21:48 -- target/invalid.sh@16 -- # RANDOM=0 00:07:23.126 21:21:48 -- target/invalid.sh@34 -- # nvmftestinit 00:07:23.126 21:21:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:23.126 21:21:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.126 21:21:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:23.126 21:21:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:23.126 21:21:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:23.126 21:21:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.126 21:21:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.126 21:21:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.126 21:21:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:23.126 21:21:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:23.126 21:21:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.126 21:21:48 -- common/autotest_common.sh@10 -- # set +x 00:07:25.028 21:21:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:25.028 21:21:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:25.028 21:21:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:25.028 21:21:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:25.028 21:21:50 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:25.028 21:21:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:25.028 21:21:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:25.028 21:21:50 -- nvmf/common.sh@295 -- # net_devs=() 00:07:25.028 21:21:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:25.028 21:21:50 -- nvmf/common.sh@296 -- # e810=() 00:07:25.028 21:21:50 -- nvmf/common.sh@296 -- # local -ga e810 00:07:25.028 21:21:50 -- nvmf/common.sh@297 -- # x722=() 00:07:25.028 21:21:50 -- nvmf/common.sh@297 -- # local -ga x722 00:07:25.028 21:21:50 -- nvmf/common.sh@298 -- # mlx=() 00:07:25.028 21:21:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:25.028 21:21:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.028 21:21:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.028 21:21:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.028 21:21:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.028 21:21:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.028 21:21:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.028 21:21:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.028 21:21:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.028 21:21:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.028 21:21:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.028 21:21:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.028 21:21:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:25.028 21:21:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:25.028 21:21:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:25.028 21:21:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.028 21:21:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:25.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:25.028 21:21:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.028 21:21:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:25.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:25.028 21:21:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.028 21:21:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:25.029 21:21:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:25.029 21:21:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:25.029 21:21:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.029 
21:21:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.029 21:21:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:25.029 21:21:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.029 21:21:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:25.029 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:25.029 21:21:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.029 21:21:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.029 21:21:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.029 21:21:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:25.029 21:21:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.029 21:21:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:25.029 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:25.029 21:21:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.029 21:21:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:25.029 21:21:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:25.029 21:21:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:25.029 21:21:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:25.029 21:21:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:25.029 21:21:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.029 21:21:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.029 21:21:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.029 21:21:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:25.029 21:21:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.029 21:21:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.029 21:21:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:25.029 21:21:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.029 21:21:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.029 21:21:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:25.029 21:21:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:25.029 21:21:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.029 21:21:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.287 21:21:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.287 21:21:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.287 21:21:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:25.287 21:21:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.287 21:21:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.287 21:21:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.287 21:21:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:25.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:07:25.287 00:07:25.287 --- 10.0.0.2 ping statistics --- 00:07:25.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.287 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:07:25.287 21:21:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:25.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:07:25.287 00:07:25.287 --- 10.0.0.1 ping statistics --- 00:07:25.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.287 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:25.287 21:21:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.287 21:21:50 -- nvmf/common.sh@411 -- # return 0 00:07:25.287 21:21:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:25.287 21:21:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.287 21:21:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:25.287 21:21:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:25.287 21:21:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.287 21:21:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:25.287 21:21:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:25.287 21:21:50 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:25.287 21:21:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:25.287 21:21:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:25.287 21:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:25.287 21:21:50 -- nvmf/common.sh@470 -- # nvmfpid=2524527 00:07:25.287 21:21:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:25.287 21:21:50 -- nvmf/common.sh@471 -- # waitforlisten 2524527 00:07:25.287 21:21:50 -- common/autotest_common.sh@817 -- # '[' -z 2524527 ']' 00:07:25.287 21:21:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.287 21:21:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:25.287 21:21:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.287 21:21:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:25.287 21:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:25.287 [2024-04-24 21:21:50.900021] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:25.287 [2024-04-24 21:21:50.900116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.287 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.545 [2024-04-24 21:21:50.974760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.545 [2024-04-24 21:21:51.084259] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.545 [2024-04-24 21:21:51.084311] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.545 [2024-04-24 21:21:51.084326] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.545 [2024-04-24 21:21:51.084338] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.545 [2024-04-24 21:21:51.084349] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
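The nvmf_tcp_init sequence traced above is what splits the two e810 ports between a target namespace and the root-namespace initiator. A condensed sketch using the interface names and addresses from this run (the real helper in nvmf/common.sh also flushes stale addresses first):

	# Move one port into a namespace for the target, keep the other for the initiator.
	ip netns add cvl_0_0_ns_spdk
	ip link set cvl_0_0 netns cvl_0_0_ns_spdk
	ip addr add 10.0.0.1/24 dev cvl_0_1
	ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
	ip link set cvl_0_1 up
	ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
	ip netns exec cvl_0_0_ns_spdk ip link set lo up
	iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

	# Verify reachability in both directions before starting nvmf_tgt,
	# exactly as the ping blocks in the trace do:
	ping -c 1 10.0.0.2
	ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1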
00:07:25.545 [2024-04-24 21:21:51.084411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:25.545 [2024-04-24 21:21:51.084456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:25.545 [2024-04-24 21:21:51.084487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:25.545 [2024-04-24 21:21:51.084489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.545 21:21:51 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:07:25.545 21:21:51 -- common/autotest_common.sh@850 -- # return 0
00:07:25.545 21:21:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:07:25.545 21:21:51 -- common/autotest_common.sh@716 -- # xtrace_disable
00:07:25.545 21:21:51 -- common/autotest_common.sh@10 -- # set +x
00:07:25.803 21:21:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:25.803 21:21:51 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:07:25.803 21:21:51 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19945
00:07:25.803 [2024-04-24 21:21:51.453008] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:07:25.803 21:21:51 -- target/invalid.sh@40 -- # out='request:
00:07:25.803 {
00:07:25.803 "nqn": "nqn.2016-06.io.spdk:cnode19945",
00:07:25.803 "tgt_name": "foobar",
00:07:25.803 "method": "nvmf_create_subsystem",
00:07:25.803 "req_id": 1
00:07:25.803 }
00:07:25.803 Got JSON-RPC error response
00:07:25.803 response:
00:07:25.803 {
00:07:25.803 "code": -32603,
00:07:25.803 "message": "Unable to find target foobar"
00:07:25.803 }'
00:07:25.803 21:21:51 -- target/invalid.sh@41 -- # [[ request:
00:07:25.803 {
00:07:25.803 "nqn": "nqn.2016-06.io.spdk:cnode19945",
00:07:25.803 "tgt_name": "foobar",
00:07:25.803 "method": "nvmf_create_subsystem",
00:07:25.803 "req_id": 1
00:07:25.803 }
00:07:25.803 Got JSON-RPC error response
00:07:25.803 response:
00:07:25.803 {
00:07:25.803 "code": -32603,
00:07:25.803 "message": "Unable to find target foobar"
00:07:25.803 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:07:25.803 21:21:51 -- target/invalid.sh@45 -- # echo -e '\x1f'
00:07:25.803 21:21:51 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27480
00:07:26.368 [2024-04-24 21:21:51.750041] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27480: invalid serial number 'SPDKISFASTANDAWESOME'
00:07:26.368 21:21:51 -- target/invalid.sh@45 -- # out='request:
00:07:26.368 {
00:07:26.368 "nqn": "nqn.2016-06.io.spdk:cnode27480",
00:07:26.368 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:07:26.368 "method": "nvmf_create_subsystem",
00:07:26.368 "req_id": 1
00:07:26.368 }
00:07:26.368 Got JSON-RPC error response
00:07:26.368 response:
00:07:26.368 {
00:07:26.368 "code": -32602,
00:07:26.368 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:07:26.368 }'
00:07:26.368 21:21:51 -- target/invalid.sh@46 -- # [[ request:
00:07:26.368 {
00:07:26.368 "nqn": "nqn.2016-06.io.spdk:cnode27480",
00:07:26.368 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:07:26.368 "method": "nvmf_create_subsystem",
00:07:26.368 "req_id": 1
00:07:26.368 }
00:07:26.368 Got JSON-RPC error response
00:07:26.368 response:
00:07:26.368 {
00:07:26.368 "code": -32602, 00:07:26.368 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:26.368 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:26.368 21:21:51 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:26.368 21:21:51 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16132 00:07:26.368 [2024-04-24 21:21:52.014862] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16132: invalid model number 'SPDK_Controller' 00:07:26.368 21:21:52 -- target/invalid.sh@50 -- # out='request: 00:07:26.368 { 00:07:26.368 "nqn": "nqn.2016-06.io.spdk:cnode16132", 00:07:26.368 "model_number": "SPDK_Controller\u001f", 00:07:26.368 "method": "nvmf_create_subsystem", 00:07:26.368 "req_id": 1 00:07:26.368 } 00:07:26.368 Got JSON-RPC error response 00:07:26.368 response: 00:07:26.368 { 00:07:26.368 "code": -32602, 00:07:26.368 "message": "Invalid MN SPDK_Controller\u001f" 00:07:26.368 }' 00:07:26.368 21:21:52 -- target/invalid.sh@51 -- # [[ request: 00:07:26.368 { 00:07:26.368 "nqn": "nqn.2016-06.io.spdk:cnode16132", 00:07:26.368 "model_number": "SPDK_Controller\u001f", 00:07:26.368 "method": "nvmf_create_subsystem", 00:07:26.368 "req_id": 1 00:07:26.368 } 00:07:26.368 Got JSON-RPC error response 00:07:26.368 response: 00:07:26.368 { 00:07:26.368 "code": -32602, 00:07:26.368 "message": "Invalid MN SPDK_Controller\u001f" 00:07:26.368 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:26.368 21:21:52 -- target/invalid.sh@54 -- # gen_random_s 21 00:07:26.368 21:21:52 -- target/invalid.sh@19 -- # local length=21 ll 00:07:26.368 21:21:52 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:26.368 21:21:52 -- target/invalid.sh@21 -- # local chars 00:07:26.368 21:21:52 -- target/invalid.sh@22 -- # local string 00:07:26.368 21:21:52 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:26.368 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.368 21:21:52 -- target/invalid.sh@25 -- # printf %x 60 00:07:26.368 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:26.368 21:21:52 -- target/invalid.sh@25 -- # string+='<' 00:07:26.368 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.368 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.368 21:21:52 -- target/invalid.sh@25 -- # printf %x 68 00:07:26.368 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:26.368 21:21:52 -- target/invalid.sh@25 -- # string+=D 00:07:26.368 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.368 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.368 21:21:52 -- target/invalid.sh@25 -- # printf %x 116 00:07:26.368 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:26.368 21:21:52 -- target/invalid.sh@25 -- # string+=t 00:07:26.368 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.368 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 51 00:07:26.626 21:21:52 -- 
target/invalid.sh@25 -- # echo -e '\x33' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+=3 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 81 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+=Q 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 96 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+='`' 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 32 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+=' ' 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 71 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+=G 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 76 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+=L 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 113 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+=q 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 61 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+== 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 55 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+=7 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 113 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+=q 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 36 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+='$' 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 106 00:07:26.626 21:21:52 -- 
target/invalid.sh@25 -- # echo -e '\x6a' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+=j 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 121 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+=y 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 125 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+='}' 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 94 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+='^' 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # printf %x 98 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:26.626 21:21:52 -- target/invalid.sh@25 -- # string+=b 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.626 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.627 21:21:52 -- target/invalid.sh@25 -- # printf %x 86 00:07:26.627 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:26.627 21:21:52 -- target/invalid.sh@25 -- # string+=V 00:07:26.627 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.627 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.627 21:21:52 -- target/invalid.sh@25 -- # printf %x 87 00:07:26.627 21:21:52 -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:26.627 21:21:52 -- target/invalid.sh@25 -- # string+=W 00:07:26.627 21:21:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.627 21:21:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.627 21:21:52 -- target/invalid.sh@28 -- # [[ < == \- ]] 00:07:26.627 21:21:52 -- target/invalid.sh@31 -- # echo ' /dev/null' 00:07:29.783 21:21:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.689 21:21:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:31.689 00:07:31.689 real 0m8.663s 00:07:31.689 user 0m19.725s 00:07:31.689 sys 0m2.463s 00:07:31.689 21:21:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.689 21:21:57 -- common/autotest_common.sh@10 -- # set +x 00:07:31.689 ************************************ 00:07:31.689 END TEST nvmf_invalid 00:07:31.689 ************************************ 00:07:31.689 21:21:57 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:31.689 21:21:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:31.689 21:21:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.689 21:21:57 -- common/autotest_common.sh@10 -- # set +x 00:07:31.689 ************************************ 00:07:31.689 START TEST nvmf_abort 00:07:31.689 ************************************ 00:07:31.689 21:21:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:31.947 * Looking for test storage... 
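The character-by-character xtrace above is invalid.sh's gen_random_s building a 21-character string from byte values 32 through 127, to be used as a deliberately malformed model number in a later nvmf_create_subsystem call. A condensed sketch of the same idea; the modulus-based picker is an approximation of the script's chars lookup table, and RANDOM=0 (set earlier in invalid.sh) is what makes the sequence reproducible:

	# Build a random string of $1 characters drawn from byte values 32..127.
	gen_random_s() {
		local length=$1 ll string=
		for (( ll = 0; ll < length; ll++ )); do
			# pick one of the 96 codes 32..127 and append it as a character
			printf -v hex '%x' $(( RANDOM % 96 + 32 ))
			string+=$(echo -e "\x$hex")
		done
		echo "$string"
	}

	gen_random_s 21   # assembles '<Dt3Q` GLq=7q$jy}^bVW' in the trace above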
00:07:31.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.948 21:21:57 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.948 21:21:57 -- nvmf/common.sh@7 -- # uname -s 00:07:31.948 21:21:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.948 21:21:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.948 21:21:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.948 21:21:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.948 21:21:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.948 21:21:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.948 21:21:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.948 21:21:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.948 21:21:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.948 21:21:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.948 21:21:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.948 21:21:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.948 21:21:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.948 21:21:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.948 21:21:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.948 21:21:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.948 21:21:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.948 21:21:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.948 21:21:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.948 21:21:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.948 21:21:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.948 21:21:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.948 21:21:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.948 21:21:57 -- paths/export.sh@5 -- # export PATH 00:07:31.948 21:21:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.948 21:21:57 -- nvmf/common.sh@47 -- # : 0 00:07:31.948 21:21:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.948 21:21:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.948 21:21:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.948 21:21:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.948 21:21:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.948 21:21:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.948 21:21:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.948 21:21:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.948 21:21:57 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:31.948 21:21:57 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:31.948 21:21:57 -- target/abort.sh@14 -- # nvmftestinit 00:07:31.948 21:21:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:31.948 21:21:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.948 21:21:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:31.948 21:21:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:31.948 21:21:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:31.948 21:21:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.948 21:21:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.948 21:21:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.948 21:21:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:31.948 21:21:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:31.948 21:21:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:31.948 21:21:57 -- common/autotest_common.sh@10 -- # set +x 00:07:33.853 21:21:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:33.853 21:21:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:33.853 21:21:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:33.853 21:21:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:33.853 21:21:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:33.853 21:21:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:33.853 21:21:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:33.853 21:21:59 -- nvmf/common.sh@295 -- # net_devs=() 00:07:33.853 21:21:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:33.853 21:21:59 -- nvmf/common.sh@296 -- 
# e810=() 00:07:33.853 21:21:59 -- nvmf/common.sh@296 -- # local -ga e810 00:07:33.853 21:21:59 -- nvmf/common.sh@297 -- # x722=() 00:07:33.853 21:21:59 -- nvmf/common.sh@297 -- # local -ga x722 00:07:33.853 21:21:59 -- nvmf/common.sh@298 -- # mlx=() 00:07:33.853 21:21:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:33.853 21:21:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.853 21:21:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.853 21:21:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.853 21:21:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.853 21:21:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.853 21:21:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.853 21:21:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.853 21:21:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.853 21:21:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.853 21:21:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.853 21:21:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.853 21:21:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:33.853 21:21:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:33.853 21:21:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:33.853 21:21:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.853 21:21:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:33.853 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:33.853 21:21:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.853 21:21:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:33.853 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:33.853 21:21:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:33.853 21:21:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.853 21:21:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.853 21:21:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:33.853 21:21:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.853 21:21:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:33.853 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:07:33.853 21:21:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.853 21:21:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.853 21:21:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.853 21:21:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:33.853 21:21:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.853 21:21:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:33.853 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:33.853 21:21:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.853 21:21:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:33.853 21:21:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:33.853 21:21:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:33.853 21:21:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:33.853 21:21:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.853 21:21:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.853 21:21:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.853 21:21:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:33.853 21:21:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.853 21:21:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.853 21:21:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:33.853 21:21:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.853 21:21:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.853 21:21:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:34.112 21:21:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:34.112 21:21:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.112 21:21:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.112 21:21:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.112 21:21:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.112 21:21:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:34.112 21:21:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.112 21:21:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.112 21:21:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.112 21:21:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:34.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:07:34.112 00:07:34.112 --- 10.0.0.2 ping statistics --- 00:07:34.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.112 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:34.112 21:21:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:07:34.112 00:07:34.112 --- 10.0.0.1 ping statistics --- 00:07:34.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.112 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:07:34.112 21:21:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.112 21:21:59 -- nvmf/common.sh@411 -- # return 0 00:07:34.112 21:21:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:34.112 21:21:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.112 21:21:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:34.112 21:21:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:34.112 21:21:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.112 21:21:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:34.112 21:21:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:34.112 21:21:59 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:34.112 21:21:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:34.112 21:21:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:34.112 21:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:34.112 21:21:59 -- nvmf/common.sh@470 -- # nvmfpid=2527171 00:07:34.112 21:21:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:34.112 21:21:59 -- nvmf/common.sh@471 -- # waitforlisten 2527171 00:07:34.112 21:21:59 -- common/autotest_common.sh@817 -- # '[' -z 2527171 ']' 00:07:34.112 21:21:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.112 21:21:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:34.112 21:21:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.112 21:21:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:34.113 21:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:34.113 [2024-04-24 21:21:59.733552] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:34.113 [2024-04-24 21:21:59.733652] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.113 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.371 [2024-04-24 21:21:59.800010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.371 [2024-04-24 21:21:59.909837] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.371 [2024-04-24 21:21:59.909900] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.371 [2024-04-24 21:21:59.909929] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.371 [2024-04-24 21:21:59.909941] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.371 [2024-04-24 21:21:59.909951] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
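The nvmf_tcp_init sequence traced above reduces to a short, reproducible network setup: the first E810 port (cvl_0_0) is moved into a private namespace to act as the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator side. A minimal sketch, assuming two back-to-back ports with the interface names and 10.0.0.0/24 addresses used in this run:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP
    ping -c 1 10.0.0.2                                # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator reachability

The two ping checks correspond to the 0.201 ms and 0.175 ms round trips logged above; only after both succeed does the harness load nvme-tcp and start the target application.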
00:07:34.371 [2024-04-24 21:21:59.910085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.371 [2024-04-24 21:21:59.910151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.371 [2024-04-24 21:21:59.910154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.371 21:22:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:34.371 21:22:00 -- common/autotest_common.sh@850 -- # return 0 00:07:34.371 21:22:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:34.371 21:22:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:34.371 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:34.629 21:22:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.629 21:22:00 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:34.629 21:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.629 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:34.629 [2024-04-24 21:22:00.057762] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.629 21:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.629 21:22:00 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:34.629 21:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.629 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:34.629 Malloc0 00:07:34.629 21:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.629 21:22:00 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:34.629 21:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.629 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:34.629 Delay0 00:07:34.629 21:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.629 21:22:00 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:34.629 21:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.629 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:34.629 21:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.629 21:22:00 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:34.629 21:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.629 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:34.629 21:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.629 21:22:00 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:34.629 21:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.629 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:34.629 [2024-04-24 21:22:00.123634] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.629 21:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.629 21:22:00 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.629 21:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.629 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:34.629 21:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.629 21:22:00 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:34.629 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.629 [2024-04-24 21:22:00.231109] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:37.200 Initializing NVMe Controllers 00:07:37.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:37.200 controller IO queue size 128 less than required 00:07:37.200 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:37.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:37.200 Initialization complete. Launching workers. 00:07:37.200 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 33429 00:07:37.200 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33493, failed to submit 62 00:07:37.200 success 33433, unsuccess 60, failed 0 00:07:37.200 21:22:02 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:37.200 21:22:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.200 21:22:02 -- common/autotest_common.sh@10 -- # set +x 00:07:37.200 21:22:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.200 21:22:02 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:37.200 21:22:02 -- target/abort.sh@38 -- # nvmftestfini 00:07:37.200 21:22:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:37.200 21:22:02 -- nvmf/common.sh@117 -- # sync 00:07:37.200 21:22:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:37.200 21:22:02 -- nvmf/common.sh@120 -- # set +e 00:07:37.200 21:22:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:37.200 21:22:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:37.200 rmmod nvme_tcp 00:07:37.200 rmmod nvme_fabrics 00:07:37.200 rmmod nvme_keyring 00:07:37.200 21:22:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:37.200 21:22:02 -- nvmf/common.sh@124 -- # set -e 00:07:37.200 21:22:02 -- nvmf/common.sh@125 -- # return 0 00:07:37.200 21:22:02 -- nvmf/common.sh@478 -- # '[' -n 2527171 ']' 00:07:37.200 21:22:02 -- nvmf/common.sh@479 -- # killprocess 2527171 00:07:37.200 21:22:02 -- common/autotest_common.sh@936 -- # '[' -z 2527171 ']' 00:07:37.200 21:22:02 -- common/autotest_common.sh@940 -- # kill -0 2527171 00:07:37.200 21:22:02 -- common/autotest_common.sh@941 -- # uname 00:07:37.200 21:22:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:37.200 21:22:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2527171 00:07:37.200 21:22:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:37.200 21:22:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:37.200 21:22:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2527171' 00:07:37.200 killing process with pid 2527171 00:07:37.200 21:22:02 -- common/autotest_common.sh@955 -- # kill 2527171 00:07:37.200 21:22:02 -- common/autotest_common.sh@960 -- # wait 2527171 00:07:37.200 21:22:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:37.200 21:22:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:37.200 21:22:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:37.200 21:22:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.200 21:22:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.200 
21:22:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.200 21:22:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.200 21:22:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.103 21:22:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:39.103 00:07:39.103 real 0m7.443s 00:07:39.103 user 0m10.553s 00:07:39.103 sys 0m2.657s 00:07:39.103 21:22:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:39.103 21:22:04 -- common/autotest_common.sh@10 -- # set +x 00:07:39.103 ************************************ 00:07:39.103 END TEST nvmf_abort 00:07:39.103 ************************************ 00:07:39.362 21:22:04 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:39.362 21:22:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:39.362 21:22:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.362 21:22:04 -- common/autotest_common.sh@10 -- # set +x 00:07:39.362 ************************************ 00:07:39.362 START TEST nvmf_ns_hotplug_stress 00:07:39.362 ************************************ 00:07:39.362 21:22:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:39.362 * Looking for test storage... 00:07:39.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.362 21:22:04 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.362 21:22:04 -- nvmf/common.sh@7 -- # uname -s 00:07:39.362 21:22:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.362 21:22:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.362 21:22:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.362 21:22:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.362 21:22:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.363 21:22:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.363 21:22:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.363 21:22:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.363 21:22:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.363 21:22:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.363 21:22:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.363 21:22:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.363 21:22:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.363 21:22:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.363 21:22:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.363 21:22:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.363 21:22:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.363 21:22:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.363 21:22:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.363 21:22:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.363 21:22:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.363 21:22:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.363 21:22:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.363 21:22:04 -- paths/export.sh@5 -- # export PATH 00:07:39.363 21:22:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.363 21:22:04 -- nvmf/common.sh@47 -- # : 0 00:07:39.363 21:22:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.363 21:22:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.363 21:22:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.363 21:22:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.363 21:22:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.363 21:22:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.363 21:22:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.363 21:22:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.363 21:22:04 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.363 21:22:04 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:07:39.363 21:22:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:39.363 21:22:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.363 21:22:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:39.363 21:22:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:39.363 21:22:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:39.363 21:22:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:39.363 21:22:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.363 21:22:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.363 21:22:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:39.363 21:22:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:39.363 21:22:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.363 21:22:04 -- common/autotest_common.sh@10 -- # set +x 00:07:41.894 21:22:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:41.894 21:22:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.894 21:22:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.894 21:22:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.894 21:22:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.894 21:22:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.894 21:22:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.894 21:22:06 -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.894 21:22:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.894 21:22:06 -- nvmf/common.sh@296 -- # e810=() 00:07:41.894 21:22:06 -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.894 21:22:06 -- nvmf/common.sh@297 -- # x722=() 00:07:41.894 21:22:06 -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.894 21:22:06 -- nvmf/common.sh@298 -- # mlx=() 00:07:41.894 21:22:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.894 21:22:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.894 21:22:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.894 21:22:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.894 21:22:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.894 21:22:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.894 21:22:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.894 21:22:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.894 21:22:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.894 21:22:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.894 21:22:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.894 21:22:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.894 21:22:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.894 21:22:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.894 21:22:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.894 21:22:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.894 21:22:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.894 21:22:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.894 21:22:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.894 21:22:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:41.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:41.894 21:22:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.894 21:22:07 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:41.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:41.894 21:22:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.894 21:22:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.894 21:22:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.894 21:22:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:41.894 21:22:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.894 21:22:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:41.894 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:41.894 21:22:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.894 21:22:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.894 21:22:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.894 21:22:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:41.894 21:22:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.894 21:22:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:41.894 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:41.894 21:22:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.894 21:22:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:41.894 21:22:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:41.894 21:22:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:41.894 21:22:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.894 21:22:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.894 21:22:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.894 21:22:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.894 21:22:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.894 21:22:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.894 21:22:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.894 21:22:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.894 21:22:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.894 21:22:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.894 21:22:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.894 21:22:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.894 21:22:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.894 21:22:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.894 21:22:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.894 21:22:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.894 21:22:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:07:41.894 21:22:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.894 21:22:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.894 21:22:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:07:41.894 00:07:41.894 --- 10.0.0.2 ping statistics --- 00:07:41.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.894 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:07:41.894 21:22:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:07:41.894 00:07:41.894 --- 10.0.0.1 ping statistics --- 00:07:41.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.894 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:07:41.894 21:22:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.894 21:22:07 -- nvmf/common.sh@411 -- # return 0 00:07:41.894 21:22:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:41.894 21:22:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.894 21:22:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:41.894 21:22:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.894 21:22:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:41.894 21:22:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:41.894 21:22:07 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:07:41.894 21:22:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:41.894 21:22:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:41.894 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:07:41.894 21:22:07 -- nvmf/common.sh@470 -- # nvmfpid=2529515 00:07:41.894 21:22:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:41.894 21:22:07 -- nvmf/common.sh@471 -- # waitforlisten 2529515 00:07:41.894 21:22:07 -- common/autotest_common.sh@817 -- # '[' -z 2529515 ']' 00:07:41.894 21:22:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.894 21:22:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:41.894 21:22:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.894 21:22:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:41.894 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:07:41.894 [2024-04-24 21:22:07.219591] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
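nvmfappstart follows the same pattern in both suites: start nvmf_tgt inside the target namespace, remember its PID, and block until the RPC socket answers. A rough shell equivalent (the retry loop is a simplification of waitforlisten, and the rpc_get_methods probe is an assumption about how readiness is detected in this tree):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is up (or give up).
    for _ in $(seq 1 100); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

With -m 0xE the app is given cores 1-3, which is why three reactors, and none on core 0, are reported once initialization completes.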
00:07:41.894 [2024-04-24 21:22:07.219706] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.894 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.894 [2024-04-24 21:22:07.285174] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.894 [2024-04-24 21:22:07.392416] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.894 [2024-04-24 21:22:07.392473] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.894 [2024-04-24 21:22:07.392501] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.894 [2024-04-24 21:22:07.392512] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.894 [2024-04-24 21:22:07.392522] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.894 [2024-04-24 21:22:07.392588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.894 [2024-04-24 21:22:07.392621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.894 [2024-04-24 21:22:07.392624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.894 21:22:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:41.894 21:22:07 -- common/autotest_common.sh@850 -- # return 0 00:07:41.894 21:22:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:41.894 21:22:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:41.894 21:22:07 -- common/autotest_common.sh@10 -- # set +x 00:07:41.894 21:22:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.894 21:22:07 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:07:41.895 21:22:07 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:42.152 [2024-04-24 21:22:07.801770] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.152 21:22:07 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:42.717 21:22:08 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.717 [2024-04-24 21:22:08.388485] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.978 21:22:08 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:43.236 21:22:08 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:43.236 Malloc0 00:07:43.493 21:22:08 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:43.493 Delay0 00:07:43.493 21:22:09 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.750 21:22:09 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:44.007 NULL1 00:07:44.007 21:22:09 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:44.265 21:22:09 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2529820 00:07:44.265 21:22:09 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:44.265 21:22:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:44.265 21:22:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.265 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.638 Read completed with error (sct=0, sc=11) 00:07:45.638 21:22:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.895 21:22:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:07:45.895 21:22:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:46.152 true 00:07:46.152 21:22:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:46.152 21:22:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.717 21:22:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.974 21:22:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:07:46.974 21:22:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:47.231 true 00:07:47.231 21:22:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:47.231 21:22:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.488 21:22:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.746 21:22:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:07:47.746 21:22:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:48.002 true 00:07:48.002 21:22:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:48.002 21:22:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:48.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.932 21:22:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.189 21:22:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:07:49.189 21:22:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:49.446 true 00:07:49.446 21:22:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:49.446 21:22:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.704 21:22:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.962 21:22:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:07:49.962 21:22:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:50.219 true 00:07:50.219 21:22:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:50.219 21:22:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.180 21:22:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.437 21:22:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:07:51.437 21:22:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:51.694 true 00:07:51.694 21:22:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:51.694 21:22:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.952 21:22:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.209 21:22:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:07:52.209 21:22:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:52.466 true 00:07:52.466 21:22:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:52.466 21:22:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.399 21:22:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.399 21:22:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:07:53.399 21:22:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:53.655 true 
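By this point the shape of the stress loop is clear: spdk_nvme_perf (PERF_PID 2529820) runs randread against the subsystem for 30 seconds while the harness repeatedly re-attaches the Delay0 namespace and grows the NULL1 bdev by one block per pass, forcing hot add, resize, and remove events under live I/O. Reconstructed from the traced rpc.py calls (a sketch, not the literal script body; the exact ordering inside ns_hotplug_stress.sh may differ):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do   # loop while spdk_nvme_perf lives
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))            # 1001, 1002, ... as logged
        $rpc bdev_null_resize NULL1 "$null_size"
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    done

The "Message suppressed 999 times" lines are the initiator-side symptom: each detach turns in-flight reads into (sct=0, sc=11) completions, consistent with Invalid Namespace or Format when a namespace is yanked mid-I/O, and perf rate-limits those messages rather than printing each one.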
00:07:53.655 21:22:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:53.655 21:22:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.220 21:22:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.220 21:22:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:07:54.220 21:22:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:54.478 true 00:07:54.478 21:22:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:54.478 21:22:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.736 21:22:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.992 21:22:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:07:54.992 21:22:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:55.249 true 00:07:55.249 21:22:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:55.249 21:22:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.621 21:22:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.621 21:22:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:07:56.621 21:22:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:56.878 true 00:07:56.878 21:22:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:56.878 21:22:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.136 21:22:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.394 21:22:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:07:57.394 21:22:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:57.651 true 00:07:57.651 21:22:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:57.651 21:22:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.908 21:22:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.165 21:22:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:07:58.165 21:22:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:58.423 true 00:07:58.423 21:22:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:58.423 21:22:23 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.356 21:22:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.614 21:22:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:07:59.614 21:22:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:59.872 true 00:07:59.872 21:22:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:07:59.872 21:22:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.130 21:22:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.388 21:22:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:08:00.388 21:22:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:00.646 true 00:08:00.646 21:22:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:00.646 21:22:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.580 21:22:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.837 21:22:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:08:01.837 21:22:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:02.095 true 00:08:02.095 21:22:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:02.095 21:22:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.352 21:22:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.610 21:22:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:08:02.610 21:22:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:02.867 true 00:08:02.867 21:22:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:02.867 21:22:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.798 21:22:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.056 21:22:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:08:04.056 21:22:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:04.315 true 00:08:04.315 21:22:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:04.315 21:22:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.607 21:22:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.864 21:22:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:08:04.864 21:22:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:05.121 true 00:08:05.121 21:22:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:05.121 21:22:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.055 21:22:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.313 21:22:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:08:06.313 21:22:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:06.571 true 00:08:06.571 21:22:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:06.571 21:22:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.829 21:22:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.086 21:22:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:08:07.086 21:22:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:07.344 true 00:08:07.344 21:22:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:07.344 21:22:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.602 21:22:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.859 21:22:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:08:07.859 21:22:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:08.116 true 00:08:08.116 21:22:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:08.116 21:22:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.050 
21:22:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.563 21:22:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:08:09.563 21:22:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:09.563 true 00:08:09.564 21:22:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:09.564 21:22:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.496 21:22:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.753 21:22:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:08:10.753 21:22:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:11.011 true 00:08:11.011 21:22:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:11.011 21:22:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.269 21:22:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.526 21:22:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:08:11.526 21:22:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:11.788 true 00:08:11.788 21:22:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:11.788 21:22:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.723 21:22:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.981 21:22:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:08:12.981 21:22:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:13.242 true 00:08:13.242 21:22:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:13.242 21:22:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.242 21:22:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.500 21:22:39 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:08:13.500 21:22:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:13.757 true 00:08:13.757 21:22:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:13.757 21:22:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.690 Initializing NVMe Controllers 00:08:14.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:14.690 Controller IO queue size 128, less than required. 00:08:14.690 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:14.690 Controller IO queue size 128, less than required. 00:08:14.690 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:14.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:14.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:14.690 Initialization complete. Launching workers. 00:08:14.690 ======================================================== 00:08:14.690 Latency(us) 00:08:14.690 Device Information : IOPS MiB/s Average min max 00:08:14.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 900.79 0.44 74650.10 3199.66 1013204.07 00:08:14.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10743.86 5.25 11914.08 2703.36 447392.39 00:08:14.690 ======================================================== 00:08:14.690 Total : 11644.65 5.69 16767.11 2703.36 1013204.07 00:08:14.690 00:08:14.690 21:22:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.947 21:22:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:08:14.947 21:22:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:15.205 true 00:08:15.205 21:22:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2529820 00:08:15.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2529820) - No such process 00:08:15.205 21:22:40 -- target/ns_hotplug_stress.sh@44 -- # wait 2529820 00:08:15.205 21:22:40 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:08:15.205 21:22:40 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:08:15.205 21:22:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:15.205 21:22:40 -- nvmf/common.sh@117 -- # sync 00:08:15.205 21:22:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:15.205 21:22:40 -- nvmf/common.sh@120 -- # set +e 00:08:15.205 21:22:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:15.205 21:22:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:15.205 rmmod nvme_tcp 00:08:15.205 rmmod nvme_fabrics 00:08:15.205 rmmod nvme_keyring 00:08:15.205 21:22:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:15.205 21:22:40 -- nvmf/common.sh@124 -- # set -e 00:08:15.205 21:22:40 -- nvmf/common.sh@125 -- # return 0 00:08:15.205 21:22:40 -- nvmf/common.sh@478 -- # '[' -n 2529515 ']' 00:08:15.205 21:22:40 -- nvmf/common.sh@479 -- # killprocess 2529515 00:08:15.205 21:22:40 -- common/autotest_common.sh@936 -- # '[' -z 
2529515 ']' 00:08:15.205 21:22:40 -- common/autotest_common.sh@940 -- # kill -0 2529515 00:08:15.205 21:22:40 -- common/autotest_common.sh@941 -- # uname 00:08:15.205 21:22:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:15.205 21:22:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2529515 00:08:15.205 21:22:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:15.205 21:22:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:15.205 21:22:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2529515' 00:08:15.205 killing process with pid 2529515 00:08:15.205 21:22:40 -- common/autotest_common.sh@955 -- # kill 2529515 00:08:15.205 21:22:40 -- common/autotest_common.sh@960 -- # wait 2529515 00:08:15.770 21:22:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:15.770 21:22:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:15.770 21:22:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:15.770 21:22:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.770 21:22:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.770 21:22:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.770 21:22:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.770 21:22:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.674 21:22:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:17.674 00:08:17.674 real 0m38.317s 00:08:17.674 user 2m28.504s 00:08:17.674 sys 0m10.057s 00:08:17.674 21:22:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:17.674 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:08:17.674 ************************************ 00:08:17.674 END TEST nvmf_ns_hotplug_stress 00:08:17.674 ************************************ 00:08:17.674 21:22:43 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:17.674 21:22:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:17.674 21:22:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.674 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:08:17.674 ************************************ 00:08:17.674 START TEST nvmf_connect_stress 00:08:17.674 ************************************ 00:08:17.674 21:22:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:17.932 * Looking for test storage... 
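For reference, the add/resize/remove cycle traced in the nvmf_ns_hotplug_stress run above (ns_hotplug_stress.sh@35-41) reduces to roughly the loop below. The rpc.py path and the Delay0/NULL1 bdev names are copied from the trace; the variable names, starting size, and loop condition are a hedged reconstruction, not the script verbatim.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    null_size=1022                                  # assumed starting point; the trace shows 1023..1028
    while kill -0 "$perf_pid" 2>/dev/null; do       # $perf_pid: the I/O generator (2529820 above)
        "$rpc" nvmf_subsystem_remove_ns "$NQN" 1    # hot-remove namespace 1 under load
        "$rpc" nvmf_subsystem_add_ns "$NQN" Delay0  # hot-add the delay bdev back as namespace 1
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"  # grow the null bdev one unit per pass
    done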
00:08:17.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.932 21:22:43 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.932 21:22:43 -- nvmf/common.sh@7 -- # uname -s 00:08:17.932 21:22:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.932 21:22:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.932 21:22:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.932 21:22:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.932 21:22:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.932 21:22:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.932 21:22:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.932 21:22:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.932 21:22:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.932 21:22:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.932 21:22:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.932 21:22:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.932 21:22:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.932 21:22:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.932 21:22:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.932 21:22:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.932 21:22:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.932 21:22:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.932 21:22:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.932 21:22:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.932 21:22:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.932 21:22:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.932 21:22:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.932 21:22:43 -- paths/export.sh@5 -- # export PATH 00:08:17.932 21:22:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.932 21:22:43 -- nvmf/common.sh@47 -- # : 0 00:08:17.932 21:22:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.932 21:22:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.932 21:22:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.932 21:22:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.932 21:22:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.932 21:22:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.932 21:22:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.932 21:22:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.932 21:22:43 -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:17.932 21:22:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:17.933 21:22:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.933 21:22:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:17.933 21:22:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:17.933 21:22:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:17.933 21:22:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.933 21:22:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.933 21:22:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.933 21:22:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:17.933 21:22:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:17.933 21:22:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.933 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:08:19.881 21:22:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:19.881 21:22:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.881 21:22:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.881 21:22:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.881 21:22:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.881 21:22:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.881 21:22:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.881 21:22:45 -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.881 21:22:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.881 21:22:45 -- nvmf/common.sh@296 -- # e810=() 00:08:19.881 21:22:45 -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.881 21:22:45 -- nvmf/common.sh@297 -- # x722=() 
00:08:19.881 21:22:45 -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.881 21:22:45 -- nvmf/common.sh@298 -- # mlx=() 00:08:19.881 21:22:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.881 21:22:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.881 21:22:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.881 21:22:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.881 21:22:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.881 21:22:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.881 21:22:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.882 21:22:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.882 21:22:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.882 21:22:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.882 21:22:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.882 21:22:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.882 21:22:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.882 21:22:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.882 21:22:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.882 21:22:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.882 21:22:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:19.882 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:19.882 21:22:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.882 21:22:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:19.882 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:19.882 21:22:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.882 21:22:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.882 21:22:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.882 21:22:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:19.882 21:22:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.882 21:22:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:19.882 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:19.882 21:22:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:08:19.882 21:22:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.882 21:22:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.882 21:22:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:19.882 21:22:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.882 21:22:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:19.882 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:19.882 21:22:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.882 21:22:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:19.882 21:22:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:19.882 21:22:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:19.882 21:22:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.882 21:22:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.882 21:22:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.882 21:22:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.882 21:22:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.882 21:22:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.882 21:22:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.882 21:22:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.882 21:22:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.882 21:22:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.882 21:22:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.882 21:22:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.882 21:22:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.882 21:22:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.882 21:22:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.882 21:22:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.882 21:22:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.882 21:22:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.882 21:22:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.882 21:22:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:08:19.882 00:08:19.882 --- 10.0.0.2 ping statistics --- 00:08:19.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.882 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:08:19.882 21:22:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:08:19.882 00:08:19.882 --- 10.0.0.1 ping statistics --- 00:08:19.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.882 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:19.882 21:22:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.882 21:22:45 -- nvmf/common.sh@411 -- # return 0 00:08:19.882 21:22:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:19.882 21:22:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.882 21:22:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:19.882 21:22:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.882 21:22:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:19.882 21:22:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:19.882 21:22:45 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:19.882 21:22:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:19.882 21:22:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:19.882 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:08:19.882 21:22:45 -- nvmf/common.sh@470 -- # nvmfpid=2535543 00:08:19.882 21:22:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:19.882 21:22:45 -- nvmf/common.sh@471 -- # waitforlisten 2535543 00:08:19.882 21:22:45 -- common/autotest_common.sh@817 -- # '[' -z 2535543 ']' 00:08:19.882 21:22:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.882 21:22:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:19.882 21:22:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.882 21:22:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:19.882 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:08:20.141 [2024-04-24 21:22:45.569490] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:20.141 [2024-04-24 21:22:45.569563] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.141 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.141 [2024-04-24 21:22:45.637157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.141 [2024-04-24 21:22:45.745873] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.141 [2024-04-24 21:22:45.745944] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.141 [2024-04-24 21:22:45.745973] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.141 [2024-04-24 21:22:45.745985] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.141 [2024-04-24 21:22:45.745995] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
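The nvmf_tcp_init plumbing traced above (nvmf/common.sh@229-268) boils down to moving one physical e810 port into a private network namespace and addressing both ends. A hedged condensation, with the interface names, addresses, and iptables rule copied from the trace:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # reachability check before the target starts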
00:08:20.141 [2024-04-24 21:22:45.749650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.141 [2024-04-24 21:22:45.749723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.141 [2024-04-24 21:22:45.753644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.400 21:22:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:20.400 21:22:45 -- common/autotest_common.sh@850 -- # return 0 00:08:20.400 21:22:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:20.400 21:22:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:20.400 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:08:20.400 21:22:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.400 21:22:45 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.400 21:22:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.400 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:08:20.400 [2024-04-24 21:22:45.901831] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.400 21:22:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.400 21:22:45 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.400 21:22:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.400 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:08:20.400 21:22:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.400 21:22:45 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.400 21:22:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.400 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:08:20.400 [2024-04-24 21:22:45.935799] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.400 21:22:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.400 21:22:45 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:20.400 21:22:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.400 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:08:20.400 NULL1 00:08:20.400 21:22:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.400 21:22:45 -- target/connect_stress.sh@21 -- # PERF_PID=2535570 00:08:20.401 21:22:45 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:20.401 21:22:45 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:20.401 21:22:45 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # seq 1 20 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.401 21:22:45 -- target/connect_stress.sh@28 -- # cat 00:08:20.401 21:22:45 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:20.401 21:22:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.401 21:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.401 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:08:20.658 21:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.658 21:22:46 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:20.658 21:22:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.658 21:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.658 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:08:21.225 21:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.225 21:22:46 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:21.225 21:22:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.225 21:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.225 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:08:21.483 21:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.483 21:22:46 -- target/connect_stress.sh@34 -- # 
kill -0 2535570 00:08:21.483 21:22:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.483 21:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.483 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:08:21.741 21:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.741 21:22:47 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:21.741 21:22:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.741 21:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.741 21:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:21.999 21:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.999 21:22:47 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:21.999 21:22:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.999 21:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.999 21:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.254 21:22:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.254 21:22:47 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:22.254 21:22:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.254 21:22:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.254 21:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.818 21:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.818 21:22:48 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:22.819 21:22:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.819 21:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.819 21:22:48 -- common/autotest_common.sh@10 -- # set +x 00:08:23.076 21:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.076 21:22:48 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:23.076 21:22:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.076 21:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.076 21:22:48 -- common/autotest_common.sh@10 -- # set +x 00:08:23.334 21:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.334 21:22:48 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:23.334 21:22:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.334 21:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.334 21:22:48 -- common/autotest_common.sh@10 -- # set +x 00:08:23.591 21:22:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.591 21:22:49 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:23.591 21:22:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.591 21:22:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.591 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:08:24.155 21:22:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.155 21:22:49 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:24.155 21:22:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.155 21:22:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.155 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:08:24.413 21:22:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.413 21:22:49 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:24.413 21:22:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.413 21:22:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.413 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:08:24.670 21:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.670 21:22:50 -- target/connect_stress.sh@34 -- # kill -0 
2535570 00:08:24.670 21:22:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.670 21:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.670 21:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:24.928 21:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.928 21:22:50 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:24.928 21:22:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.928 21:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.928 21:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:25.186 21:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.186 21:22:50 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:25.186 21:22:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.186 21:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.186 21:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:25.751 21:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.751 21:22:51 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:25.751 21:22:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.751 21:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.751 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:08:26.009 21:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.009 21:22:51 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:26.009 21:22:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.009 21:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.009 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:08:26.266 21:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.266 21:22:51 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:26.266 21:22:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.266 21:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.266 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:08:26.524 21:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.524 21:22:52 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:26.524 21:22:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.524 21:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.524 21:22:52 -- common/autotest_common.sh@10 -- # set +x 00:08:26.782 21:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.782 21:22:52 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:26.782 21:22:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.782 21:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.782 21:22:52 -- common/autotest_common.sh@10 -- # set +x 00:08:27.347 21:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.347 21:22:52 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:27.347 21:22:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.347 21:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.347 21:22:52 -- common/autotest_common.sh@10 -- # set +x 00:08:27.604 21:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.604 21:22:53 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:27.604 21:22:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.604 21:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.604 21:22:53 -- common/autotest_common.sh@10 -- # set +x 00:08:27.862 21:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.862 21:22:53 -- target/connect_stress.sh@34 -- # kill -0 2535570 
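The kill -0 / rpc_cmd pairs repeating above and below are connect_stress.sh's watchdog: while the stress tool (PID 2535570) stays alive, the script keeps issuing an RPC against the target. A hedged reading of that loop; the per-pass RPC payload and any pacing between passes are assumptions, since the trace elides them:

    while kill -0 "$PERF_PID" 2>/dev/null; do   # stress tool still running?
        rpc_cmd                                 # exercise the target's /var/tmp/spdk.sock
    done
    wait "$PERF_PID"                            # reap it once kill -0 finally fails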
00:08:27.862 21:22:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.862 21:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.862 21:22:53 -- common/autotest_common.sh@10 -- # set +x 00:08:28.119 21:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.119 21:22:53 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:28.119 21:22:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.119 21:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.119 21:22:53 -- common/autotest_common.sh@10 -- # set +x 00:08:28.377 21:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.377 21:22:54 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:28.377 21:22:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.377 21:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.377 21:22:54 -- common/autotest_common.sh@10 -- # set +x 00:08:28.943 21:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.943 21:22:54 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:28.943 21:22:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.943 21:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.943 21:22:54 -- common/autotest_common.sh@10 -- # set +x 00:08:29.201 21:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.201 21:22:54 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:29.201 21:22:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.201 21:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.201 21:22:54 -- common/autotest_common.sh@10 -- # set +x 00:08:29.459 21:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.459 21:22:54 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:29.459 21:22:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.459 21:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.459 21:22:54 -- common/autotest_common.sh@10 -- # set +x 00:08:29.717 21:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.717 21:22:55 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:29.717 21:22:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.717 21:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.717 21:22:55 -- common/autotest_common.sh@10 -- # set +x 00:08:29.976 21:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.976 21:22:55 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:29.976 21:22:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.976 21:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.976 21:22:55 -- common/autotest_common.sh@10 -- # set +x 00:08:30.541 21:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.541 21:22:55 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:30.541 21:22:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.541 21:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.541 21:22:55 -- common/autotest_common.sh@10 -- # set +x 00:08:30.541 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:30.799 21:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.799 21:22:56 -- target/connect_stress.sh@34 -- # kill -0 2535570 00:08:30.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2535570) - No such process 00:08:30.799 21:22:56 -- target/connect_stress.sh@38 -- # wait 2535570 00:08:30.799 21:22:56 -- target/connect_stress.sh@39 -- # rm 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:30.799 21:22:56 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:30.799 21:22:56 -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:30.799 21:22:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:30.799 21:22:56 -- nvmf/common.sh@117 -- # sync 00:08:30.799 21:22:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.799 21:22:56 -- nvmf/common.sh@120 -- # set +e 00:08:30.800 21:22:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.800 21:22:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.800 rmmod nvme_tcp 00:08:30.800 rmmod nvme_fabrics 00:08:30.800 rmmod nvme_keyring 00:08:30.800 21:22:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.800 21:22:56 -- nvmf/common.sh@124 -- # set -e 00:08:30.800 21:22:56 -- nvmf/common.sh@125 -- # return 0 00:08:30.800 21:22:56 -- nvmf/common.sh@478 -- # '[' -n 2535543 ']' 00:08:30.800 21:22:56 -- nvmf/common.sh@479 -- # killprocess 2535543 00:08:30.800 21:22:56 -- common/autotest_common.sh@936 -- # '[' -z 2535543 ']' 00:08:30.800 21:22:56 -- common/autotest_common.sh@940 -- # kill -0 2535543 00:08:30.800 21:22:56 -- common/autotest_common.sh@941 -- # uname 00:08:30.800 21:22:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:30.800 21:22:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2535543 00:08:30.800 21:22:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:30.800 21:22:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:30.800 21:22:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2535543' 00:08:30.800 killing process with pid 2535543 00:08:30.800 21:22:56 -- common/autotest_common.sh@955 -- # kill 2535543 00:08:30.800 21:22:56 -- common/autotest_common.sh@960 -- # wait 2535543 00:08:31.058 21:22:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:31.058 21:22:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:31.058 21:22:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:31.058 21:22:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.058 21:22:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.058 21:22:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.058 21:22:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.058 21:22:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.986 21:22:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:33.245 00:08:33.245 real 0m15.346s 00:08:33.245 user 0m38.118s 00:08:33.245 sys 0m6.192s 00:08:33.245 21:22:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:33.245 21:22:58 -- common/autotest_common.sh@10 -- # set +x 00:08:33.245 ************************************ 00:08:33.245 END TEST nvmf_connect_stress 00:08:33.245 ************************************ 00:08:33.245 21:22:58 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:33.245 21:22:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:33.245 21:22:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.245 21:22:58 -- common/autotest_common.sh@10 -- # set +x 00:08:33.245 ************************************ 00:08:33.245 START TEST nvmf_fused_ordering 00:08:33.245 ************************************ 00:08:33.245 21:22:58 -- common/autotest_common.sh@1111 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:33.245 * Looking for test storage... 00:08:33.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.245 21:22:58 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.245 21:22:58 -- nvmf/common.sh@7 -- # uname -s 00:08:33.245 21:22:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.245 21:22:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.245 21:22:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.245 21:22:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.245 21:22:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.245 21:22:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.245 21:22:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.245 21:22:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.245 21:22:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.245 21:22:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.245 21:22:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.245 21:22:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.245 21:22:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.245 21:22:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.245 21:22:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.245 21:22:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.245 21:22:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.245 21:22:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.245 21:22:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.245 21:22:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.245 21:22:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.245 21:22:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.245 21:22:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.245 21:22:58 -- paths/export.sh@5 -- # export PATH 00:08:33.245 21:22:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.245 21:22:58 -- nvmf/common.sh@47 -- # : 0 00:08:33.245 21:22:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.245 21:22:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.245 21:22:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.245 21:22:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.245 21:22:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.245 21:22:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.245 21:22:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.245 21:22:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.245 21:22:58 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:33.245 21:22:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:33.245 21:22:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.245 21:22:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:33.245 21:22:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:33.245 21:22:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:33.245 21:22:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.245 21:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.245 21:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.245 21:22:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:33.245 21:22:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:33.245 21:22:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.245 21:22:58 -- common/autotest_common.sh@10 -- # set +x 00:08:35.777 21:23:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:35.777 21:23:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:35.777 21:23:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:35.777 21:23:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:35.777 21:23:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:35.777 21:23:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:35.777 21:23:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:35.777 21:23:00 -- nvmf/common.sh@295 -- # net_devs=() 00:08:35.777 21:23:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:35.778 21:23:00 -- nvmf/common.sh@296 -- # e810=() 00:08:35.778 21:23:00 -- nvmf/common.sh@296 -- # local -ga e810 00:08:35.778 21:23:00 -- nvmf/common.sh@297 -- # x722=() 
00:08:35.778 21:23:00 -- nvmf/common.sh@297 -- # local -ga x722 00:08:35.778 21:23:00 -- nvmf/common.sh@298 -- # mlx=() 00:08:35.778 21:23:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:35.778 21:23:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.778 21:23:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.778 21:23:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.778 21:23:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.778 21:23:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.778 21:23:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.778 21:23:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.778 21:23:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.778 21:23:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.778 21:23:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.778 21:23:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.778 21:23:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:35.778 21:23:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:35.778 21:23:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:35.778 21:23:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.778 21:23:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:35.778 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:35.778 21:23:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.778 21:23:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:35.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:35.778 21:23:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:35.778 21:23:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.778 21:23:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.778 21:23:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:35.778 21:23:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.778 21:23:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:35.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:35.778 21:23:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:08:35.778 21:23:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.778 21:23:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.778 21:23:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:35.778 21:23:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.778 21:23:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:35.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:35.778 21:23:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.778 21:23:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:35.778 21:23:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:35.778 21:23:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:35.778 21:23:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:35.778 21:23:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.778 21:23:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.778 21:23:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.778 21:23:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:35.778 21:23:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.778 21:23:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.778 21:23:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:35.778 21:23:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.778 21:23:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.778 21:23:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:35.778 21:23:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:35.778 21:23:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.778 21:23:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.778 21:23:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.778 21:23:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.778 21:23:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:35.778 21:23:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.778 21:23:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:35.778 21:23:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.778 21:23:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:35.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:08:35.778 00:08:35.778 --- 10.0.0.2 ping statistics --- 00:08:35.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.778 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:08:35.778 21:23:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:35.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:08:35.778 00:08:35.778 --- 10.0.0.1 ping statistics --- 00:08:35.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.778 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:08:35.778 21:23:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.778 21:23:01 -- nvmf/common.sh@411 -- # return 0 00:08:35.778 21:23:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:35.778 21:23:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.778 21:23:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:35.778 21:23:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:35.778 21:23:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.778 21:23:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:35.778 21:23:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:35.778 21:23:01 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:35.778 21:23:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:35.778 21:23:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:35.778 21:23:01 -- common/autotest_common.sh@10 -- # set +x 00:08:35.778 21:23:01 -- nvmf/common.sh@470 -- # nvmfpid=2538845 00:08:35.778 21:23:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:35.778 21:23:01 -- nvmf/common.sh@471 -- # waitforlisten 2538845 00:08:35.778 21:23:01 -- common/autotest_common.sh@817 -- # '[' -z 2538845 ']' 00:08:35.778 21:23:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.778 21:23:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:35.778 21:23:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.778 21:23:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:35.778 21:23:01 -- common/autotest_common.sh@10 -- # set +x 00:08:35.778 [2024-04-24 21:23:01.106131] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:35.778 [2024-04-24 21:23:01.106213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.778 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.778 [2024-04-24 21:23:01.179853] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.778 [2024-04-24 21:23:01.298180] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.778 [2024-04-24 21:23:01.298250] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.778 [2024-04-24 21:23:01.298266] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.778 [2024-04-24 21:23:01.298279] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.778 [2024-04-24 21:23:01.298291] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
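nvmfappstart, as traced here with -m 0x2, launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A hedged sketch; the rpc_get_methods polling below is a paraphrase of what waitforlisten does, not its verbatim body:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1            # give up if the target died during startup
        sleep 0.5
    done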
00:08:35.778 [2024-04-24 21:23:01.298321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.713 21:23:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:36.713 21:23:02 -- common/autotest_common.sh@850 -- # return 0 00:08:36.713 21:23:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:36.713 21:23:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:36.713 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:36.713 21:23:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.713 21:23:02 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.713 21:23:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.713 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:36.713 [2024-04-24 21:23:02.102947] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.713 21:23:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.713 21:23:02 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:36.713 21:23:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.713 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:36.713 21:23:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.713 21:23:02 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.713 21:23:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.713 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:36.713 [2024-04-24 21:23:02.119125] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.713 21:23:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.713 21:23:02 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:36.713 21:23:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.713 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:36.713 NULL1 00:08:36.713 21:23:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.713 21:23:02 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:36.714 21:23:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.714 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:36.714 21:23:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.714 21:23:02 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:36.714 21:23:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.714 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:36.714 21:23:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.714 21:23:02 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:36.714 [2024-04-24 21:23:02.165792] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
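The provisioning steps above all go through rpc_cmd, which in the autotest harness forwards to SPDK's scripts/rpc.py over the default /var/tmp/spdk.sock socket. Assuming that pass-through, the same sequence expressed directly against rpc.py, with every flag copied from the trace (a sketch, not the fused_ordering.sh source):

# Sketch: configure the running nvmf_tgt exactly as the test above does.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8192-byte IO unit
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # exported as namespace 1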
00:08:36.714 [2024-04-24 21:23:02.165833] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538997 ] 00:08:36.714 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.279 Attached to nqn.2016-06.io.spdk:cnode1 00:08:37.279 Namespace ID: 1 size: 1GB 00:08:37.279 fused_ordering(0) 00:08:37.279 fused_ordering(1) 00:08:37.279 fused_ordering(2) 00:08:37.279 fused_ordering(3) 00:08:37.279 fused_ordering(4) 00:08:37.279 fused_ordering(5) 00:08:37.279 fused_ordering(6) 00:08:37.279 fused_ordering(7) 00:08:37.279 fused_ordering(8) 00:08:37.279 fused_ordering(9) 00:08:37.279 fused_ordering(10) 00:08:37.279 fused_ordering(11) 00:08:37.279 fused_ordering(12) 00:08:37.279 fused_ordering(13) 00:08:37.279 fused_ordering(14) 00:08:37.279 fused_ordering(15) 00:08:37.279 fused_ordering(16) 00:08:37.279 fused_ordering(17) 00:08:37.279 fused_ordering(18) 00:08:37.279 fused_ordering(19) 00:08:37.279 fused_ordering(20) 00:08:37.279 fused_ordering(21) 00:08:37.279 fused_ordering(22) 00:08:37.279 fused_ordering(23) 00:08:37.279 fused_ordering(24) 00:08:37.279 fused_ordering(25) 00:08:37.279 fused_ordering(26) 00:08:37.279 fused_ordering(27) 00:08:37.279 fused_ordering(28) 00:08:37.279 fused_ordering(29) 00:08:37.279 fused_ordering(30) 00:08:37.279 fused_ordering(31) 00:08:37.279 fused_ordering(32) 00:08:37.279 fused_ordering(33) 00:08:37.279 fused_ordering(34) 00:08:37.279 fused_ordering(35) 00:08:37.279 fused_ordering(36) 00:08:37.279 fused_ordering(37) 00:08:37.279 fused_ordering(38) 00:08:37.279 fused_ordering(39) 00:08:37.279 fused_ordering(40) 00:08:37.279 fused_ordering(41) 00:08:37.279 fused_ordering(42) 00:08:37.279 fused_ordering(43) 00:08:37.279 fused_ordering(44) 00:08:37.279 fused_ordering(45) 00:08:37.279 fused_ordering(46) 00:08:37.279 fused_ordering(47) 00:08:37.279 fused_ordering(48) 00:08:37.279 fused_ordering(49) 00:08:37.279 fused_ordering(50) 00:08:37.279 fused_ordering(51) 00:08:37.279 fused_ordering(52) 00:08:37.279 fused_ordering(53) 00:08:37.279 fused_ordering(54) 00:08:37.279 fused_ordering(55) 00:08:37.279 fused_ordering(56) 00:08:37.279 fused_ordering(57) 00:08:37.279 fused_ordering(58) 00:08:37.279 fused_ordering(59) 00:08:37.279 fused_ordering(60) 00:08:37.279 fused_ordering(61) 00:08:37.279 fused_ordering(62) 00:08:37.279 fused_ordering(63) 00:08:37.279 fused_ordering(64) 00:08:37.279 fused_ordering(65) 00:08:37.279 fused_ordering(66) 00:08:37.279 fused_ordering(67) 00:08:37.279 fused_ordering(68) 00:08:37.279 fused_ordering(69) 00:08:37.279 fused_ordering(70) 00:08:37.279 fused_ordering(71) 00:08:37.279 fused_ordering(72) 00:08:37.279 fused_ordering(73) 00:08:37.279 fused_ordering(74) 00:08:37.279 fused_ordering(75) 00:08:37.279 fused_ordering(76) 00:08:37.279 fused_ordering(77) 00:08:37.279 fused_ordering(78) 00:08:37.279 fused_ordering(79) 00:08:37.279 fused_ordering(80) 00:08:37.279 fused_ordering(81) 00:08:37.279 fused_ordering(82) 00:08:37.279 fused_ordering(83) 00:08:37.279 fused_ordering(84) 00:08:37.279 fused_ordering(85) 00:08:37.279 fused_ordering(86) 00:08:37.279 fused_ordering(87) 00:08:37.279 fused_ordering(88) 00:08:37.279 fused_ordering(89) 00:08:37.279 fused_ordering(90) 00:08:37.279 fused_ordering(91) 00:08:37.279 fused_ordering(92) 00:08:37.279 fused_ordering(93) 00:08:37.279 fused_ordering(94) 00:08:37.279 fused_ordering(95) 00:08:37.279 fused_ordering(96) 00:08:37.279 
fused_ordering(97) 00:08:37.279 fused_ordering(98) 00:08:37.279 fused_ordering(99) 00:08:37.279 fused_ordering(100) 00:08:37.279 fused_ordering(101) 00:08:37.279 fused_ordering(102) 00:08:37.279 fused_ordering(103) 00:08:37.279 fused_ordering(104) 00:08:37.279 fused_ordering(105) 00:08:37.279 fused_ordering(106) 00:08:37.279 fused_ordering(107) 00:08:37.279 fused_ordering(108) 00:08:37.279 fused_ordering(109) 00:08:37.279 fused_ordering(110) 00:08:37.279 fused_ordering(111) 00:08:37.280 fused_ordering(112) 00:08:37.280 fused_ordering(113) 00:08:37.280 fused_ordering(114) 00:08:37.280 fused_ordering(115) 00:08:37.280 fused_ordering(116) 00:08:37.280 fused_ordering(117) 00:08:37.280 fused_ordering(118) 00:08:37.280 fused_ordering(119) 00:08:37.280 fused_ordering(120) 00:08:37.280 fused_ordering(121) 00:08:37.280 fused_ordering(122) 00:08:37.280 fused_ordering(123) 00:08:37.280 fused_ordering(124) 00:08:37.280 fused_ordering(125) 00:08:37.280 fused_ordering(126) 00:08:37.280 fused_ordering(127) 00:08:37.280 fused_ordering(128) 00:08:37.280 fused_ordering(129) 00:08:37.280 fused_ordering(130) 00:08:37.280 fused_ordering(131) 00:08:37.280 fused_ordering(132) 00:08:37.280 fused_ordering(133) 00:08:37.280 fused_ordering(134) 00:08:37.280 fused_ordering(135) 00:08:37.280 fused_ordering(136) 00:08:37.280 fused_ordering(137) 00:08:37.280 fused_ordering(138) 00:08:37.280 fused_ordering(139) 00:08:37.280 fused_ordering(140) 00:08:37.280 fused_ordering(141) 00:08:37.280 fused_ordering(142) 00:08:37.280 fused_ordering(143) 00:08:37.280 fused_ordering(144) 00:08:37.280 fused_ordering(145) 00:08:37.280 fused_ordering(146) 00:08:37.280 fused_ordering(147) 00:08:37.280 fused_ordering(148) 00:08:37.280 fused_ordering(149) 00:08:37.280 fused_ordering(150) 00:08:37.280 fused_ordering(151) 00:08:37.280 fused_ordering(152) 00:08:37.280 fused_ordering(153) 00:08:37.280 fused_ordering(154) 00:08:37.280 fused_ordering(155) 00:08:37.280 fused_ordering(156) 00:08:37.280 fused_ordering(157) 00:08:37.280 fused_ordering(158) 00:08:37.280 fused_ordering(159) 00:08:37.280 fused_ordering(160) 00:08:37.280 fused_ordering(161) 00:08:37.280 fused_ordering(162) 00:08:37.280 fused_ordering(163) 00:08:37.280 fused_ordering(164) 00:08:37.280 fused_ordering(165) 00:08:37.280 fused_ordering(166) 00:08:37.280 fused_ordering(167) 00:08:37.280 fused_ordering(168) 00:08:37.280 fused_ordering(169) 00:08:37.280 fused_ordering(170) 00:08:37.280 fused_ordering(171) 00:08:37.280 fused_ordering(172) 00:08:37.280 fused_ordering(173) 00:08:37.280 fused_ordering(174) 00:08:37.280 fused_ordering(175) 00:08:37.280 fused_ordering(176) 00:08:37.280 fused_ordering(177) 00:08:37.280 fused_ordering(178) 00:08:37.280 fused_ordering(179) 00:08:37.280 fused_ordering(180) 00:08:37.280 fused_ordering(181) 00:08:37.280 fused_ordering(182) 00:08:37.280 fused_ordering(183) 00:08:37.280 fused_ordering(184) 00:08:37.280 fused_ordering(185) 00:08:37.280 fused_ordering(186) 00:08:37.280 fused_ordering(187) 00:08:37.280 fused_ordering(188) 00:08:37.280 fused_ordering(189) 00:08:37.280 fused_ordering(190) 00:08:37.280 fused_ordering(191) 00:08:37.280 fused_ordering(192) 00:08:37.280 fused_ordering(193) 00:08:37.280 fused_ordering(194) 00:08:37.280 fused_ordering(195) 00:08:37.280 fused_ordering(196) 00:08:37.280 fused_ordering(197) 00:08:37.280 fused_ordering(198) 00:08:37.280 fused_ordering(199) 00:08:37.280 fused_ordering(200) 00:08:37.280 fused_ordering(201) 00:08:37.280 fused_ordering(202) 00:08:37.280 fused_ordering(203) 00:08:37.280 fused_ordering(204) 
00:08:37.280 fused_ordering(205) 00:08:37.844 fused_ordering(206) 00:08:37.844 fused_ordering(207) 00:08:37.844 fused_ordering(208) 00:08:37.844 fused_ordering(209) 00:08:37.844 fused_ordering(210) 00:08:37.844 fused_ordering(211) 00:08:37.844 fused_ordering(212) 00:08:37.844 fused_ordering(213) 00:08:37.844 fused_ordering(214) 00:08:37.844 fused_ordering(215) 00:08:37.844 fused_ordering(216) 00:08:37.844 fused_ordering(217) 00:08:37.844 fused_ordering(218) 00:08:37.844 fused_ordering(219) 00:08:37.844 fused_ordering(220) 00:08:37.844 fused_ordering(221) 00:08:37.844 fused_ordering(222) 00:08:37.844 fused_ordering(223) 00:08:37.844 fused_ordering(224) 00:08:37.844 fused_ordering(225) 00:08:37.844 fused_ordering(226) 00:08:37.844 fused_ordering(227) 00:08:37.844 fused_ordering(228) 00:08:37.844 fused_ordering(229) 00:08:37.844 fused_ordering(230) 00:08:37.844 fused_ordering(231) 00:08:37.844 fused_ordering(232) 00:08:37.844 fused_ordering(233) 00:08:37.844 fused_ordering(234) 00:08:37.844 fused_ordering(235) 00:08:37.844 fused_ordering(236) 00:08:37.844 fused_ordering(237) 00:08:37.844 fused_ordering(238) 00:08:37.844 fused_ordering(239) 00:08:37.844 fused_ordering(240) 00:08:37.844 fused_ordering(241) 00:08:37.844 fused_ordering(242) 00:08:37.844 fused_ordering(243) 00:08:37.844 fused_ordering(244) 00:08:37.844 fused_ordering(245) 00:08:37.844 fused_ordering(246) 00:08:37.844 fused_ordering(247) 00:08:37.844 fused_ordering(248) 00:08:37.844 fused_ordering(249) 00:08:37.844 fused_ordering(250) 00:08:37.844 fused_ordering(251) 00:08:37.844 fused_ordering(252) 00:08:37.844 fused_ordering(253) 00:08:37.844 fused_ordering(254) 00:08:37.844 fused_ordering(255) 00:08:37.844 fused_ordering(256) 00:08:37.844 fused_ordering(257) 00:08:37.844 fused_ordering(258) 00:08:37.844 fused_ordering(259) 00:08:37.844 fused_ordering(260) 00:08:37.844 fused_ordering(261) 00:08:37.844 fused_ordering(262) 00:08:37.844 fused_ordering(263) 00:08:37.844 fused_ordering(264) 00:08:37.844 fused_ordering(265) 00:08:37.844 fused_ordering(266) 00:08:37.844 fused_ordering(267) 00:08:37.844 fused_ordering(268) 00:08:37.844 fused_ordering(269) 00:08:37.844 fused_ordering(270) 00:08:37.844 fused_ordering(271) 00:08:37.844 fused_ordering(272) 00:08:37.844 fused_ordering(273) 00:08:37.844 fused_ordering(274) 00:08:37.844 fused_ordering(275) 00:08:37.844 fused_ordering(276) 00:08:37.844 fused_ordering(277) 00:08:37.844 fused_ordering(278) 00:08:37.844 fused_ordering(279) 00:08:37.844 fused_ordering(280) 00:08:37.844 fused_ordering(281) 00:08:37.844 fused_ordering(282) 00:08:37.844 fused_ordering(283) 00:08:37.844 fused_ordering(284) 00:08:37.844 fused_ordering(285) 00:08:37.844 fused_ordering(286) 00:08:37.844 fused_ordering(287) 00:08:37.844 fused_ordering(288) 00:08:37.844 fused_ordering(289) 00:08:37.844 fused_ordering(290) 00:08:37.844 fused_ordering(291) 00:08:37.844 fused_ordering(292) 00:08:37.844 fused_ordering(293) 00:08:37.844 fused_ordering(294) 00:08:37.844 fused_ordering(295) 00:08:37.844 fused_ordering(296) 00:08:37.844 fused_ordering(297) 00:08:37.844 fused_ordering(298) 00:08:37.844 fused_ordering(299) 00:08:37.844 fused_ordering(300) 00:08:37.844 fused_ordering(301) 00:08:37.844 fused_ordering(302) 00:08:37.844 fused_ordering(303) 00:08:37.844 fused_ordering(304) 00:08:37.844 fused_ordering(305) 00:08:37.844 fused_ordering(306) 00:08:37.844 fused_ordering(307) 00:08:37.844 fused_ordering(308) 00:08:37.844 fused_ordering(309) 00:08:37.844 fused_ordering(310) 00:08:37.844 fused_ordering(311) 00:08:37.844 
fused_ordering(312) 00:08:37.844 fused_ordering(313) 00:08:37.844 fused_ordering(314) 00:08:37.844 fused_ordering(315) 00:08:37.844 fused_ordering(316) 00:08:37.844 fused_ordering(317) 00:08:37.844 fused_ordering(318) 00:08:37.844 fused_ordering(319) 00:08:37.844 fused_ordering(320) 00:08:37.844 fused_ordering(321) 00:08:37.844 fused_ordering(322) 00:08:37.844 fused_ordering(323) 00:08:37.844 fused_ordering(324) 00:08:37.844 fused_ordering(325) 00:08:37.844 fused_ordering(326) 00:08:37.844 fused_ordering(327) 00:08:37.844 fused_ordering(328) 00:08:37.844 fused_ordering(329) 00:08:37.844 fused_ordering(330) 00:08:37.844 fused_ordering(331) 00:08:37.844 fused_ordering(332) 00:08:37.844 fused_ordering(333) 00:08:37.844 fused_ordering(334) 00:08:37.844 fused_ordering(335) 00:08:37.844 fused_ordering(336) 00:08:37.844 fused_ordering(337) 00:08:37.844 fused_ordering(338) 00:08:37.844 fused_ordering(339) 00:08:37.844 fused_ordering(340) 00:08:37.844 fused_ordering(341) 00:08:37.844 fused_ordering(342) 00:08:37.844 fused_ordering(343) 00:08:37.844 fused_ordering(344) 00:08:37.844 fused_ordering(345) 00:08:37.844 fused_ordering(346) 00:08:37.844 fused_ordering(347) 00:08:37.844 fused_ordering(348) 00:08:37.844 fused_ordering(349) 00:08:37.844 fused_ordering(350) 00:08:37.844 fused_ordering(351) 00:08:37.844 fused_ordering(352) 00:08:37.844 fused_ordering(353) 00:08:37.844 fused_ordering(354) 00:08:37.844 fused_ordering(355) 00:08:37.844 fused_ordering(356) 00:08:37.844 fused_ordering(357) 00:08:37.844 fused_ordering(358) 00:08:37.844 fused_ordering(359) 00:08:37.844 fused_ordering(360) 00:08:37.844 fused_ordering(361) 00:08:37.844 fused_ordering(362) 00:08:37.844 fused_ordering(363) 00:08:37.844 fused_ordering(364) 00:08:37.844 fused_ordering(365) 00:08:37.844 fused_ordering(366) 00:08:37.844 fused_ordering(367) 00:08:37.844 fused_ordering(368) 00:08:37.844 fused_ordering(369) 00:08:37.844 fused_ordering(370) 00:08:37.844 fused_ordering(371) 00:08:37.844 fused_ordering(372) 00:08:37.844 fused_ordering(373) 00:08:37.844 fused_ordering(374) 00:08:37.844 fused_ordering(375) 00:08:37.844 fused_ordering(376) 00:08:37.844 fused_ordering(377) 00:08:37.844 fused_ordering(378) 00:08:37.844 fused_ordering(379) 00:08:37.844 fused_ordering(380) 00:08:37.844 fused_ordering(381) 00:08:37.844 fused_ordering(382) 00:08:37.844 fused_ordering(383) 00:08:37.844 fused_ordering(384) 00:08:37.844 fused_ordering(385) 00:08:37.844 fused_ordering(386) 00:08:37.844 fused_ordering(387) 00:08:37.844 fused_ordering(388) 00:08:37.844 fused_ordering(389) 00:08:37.844 fused_ordering(390) 00:08:37.844 fused_ordering(391) 00:08:37.844 fused_ordering(392) 00:08:37.844 fused_ordering(393) 00:08:37.844 fused_ordering(394) 00:08:37.844 fused_ordering(395) 00:08:37.844 fused_ordering(396) 00:08:37.844 fused_ordering(397) 00:08:37.844 fused_ordering(398) 00:08:37.844 fused_ordering(399) 00:08:37.844 fused_ordering(400) 00:08:37.844 fused_ordering(401) 00:08:37.844 fused_ordering(402) 00:08:37.844 fused_ordering(403) 00:08:37.844 fused_ordering(404) 00:08:37.844 fused_ordering(405) 00:08:37.844 fused_ordering(406) 00:08:37.844 fused_ordering(407) 00:08:37.844 fused_ordering(408) 00:08:37.844 fused_ordering(409) 00:08:37.844 fused_ordering(410) 00:08:38.777 fused_ordering(411) 00:08:38.777 fused_ordering(412) 00:08:38.777 fused_ordering(413) 00:08:38.777 fused_ordering(414) 00:08:38.777 fused_ordering(415) 00:08:38.777 fused_ordering(416) 00:08:38.777 fused_ordering(417) 00:08:38.777 fused_ordering(418) 00:08:38.777 fused_ordering(419) 
00:08:38.777 fused_ordering(420) 00:08:38.777 fused_ordering(421) 00:08:38.777 fused_ordering(422) 00:08:38.777 fused_ordering(423) 00:08:38.777 fused_ordering(424) 00:08:38.777 fused_ordering(425) 00:08:38.777 fused_ordering(426) 00:08:38.777 fused_ordering(427) 00:08:38.777 fused_ordering(428) 00:08:38.777 fused_ordering(429) 00:08:38.777 fused_ordering(430) 00:08:38.777 fused_ordering(431) 00:08:38.777 fused_ordering(432) 00:08:38.777 fused_ordering(433) 00:08:38.777 fused_ordering(434) 00:08:38.777 fused_ordering(435) 00:08:38.777 fused_ordering(436) 00:08:38.777 fused_ordering(437) 00:08:38.777 fused_ordering(438) 00:08:38.777 fused_ordering(439) 00:08:38.777 fused_ordering(440) 00:08:38.777 fused_ordering(441) 00:08:38.777 fused_ordering(442) 00:08:38.777 fused_ordering(443) 00:08:38.777 fused_ordering(444) 00:08:38.777 fused_ordering(445) 00:08:38.777 fused_ordering(446) 00:08:38.777 fused_ordering(447) 00:08:38.777 fused_ordering(448) 00:08:38.777 fused_ordering(449) 00:08:38.777 fused_ordering(450) 00:08:38.777 fused_ordering(451) 00:08:38.777 fused_ordering(452) 00:08:38.777 fused_ordering(453) 00:08:38.777 fused_ordering(454) 00:08:38.777 fused_ordering(455) 00:08:38.777 fused_ordering(456) 00:08:38.777 fused_ordering(457) 00:08:38.777 fused_ordering(458) 00:08:38.777 fused_ordering(459) 00:08:38.777 fused_ordering(460) 00:08:38.777 fused_ordering(461) 00:08:38.777 fused_ordering(462) 00:08:38.777 fused_ordering(463) 00:08:38.777 fused_ordering(464) 00:08:38.777 fused_ordering(465) 00:08:38.777 fused_ordering(466) 00:08:38.777 fused_ordering(467) 00:08:38.777 fused_ordering(468) 00:08:38.777 fused_ordering(469) 00:08:38.777 fused_ordering(470) 00:08:38.777 fused_ordering(471) 00:08:38.777 fused_ordering(472) 00:08:38.777 fused_ordering(473) 00:08:38.777 fused_ordering(474) 00:08:38.777 fused_ordering(475) 00:08:38.777 fused_ordering(476) 00:08:38.777 fused_ordering(477) 00:08:38.777 fused_ordering(478) 00:08:38.777 fused_ordering(479) 00:08:38.777 fused_ordering(480) 00:08:38.777 fused_ordering(481) 00:08:38.777 fused_ordering(482) 00:08:38.777 fused_ordering(483) 00:08:38.777 fused_ordering(484) 00:08:38.777 fused_ordering(485) 00:08:38.777 fused_ordering(486) 00:08:38.777 fused_ordering(487) 00:08:38.777 fused_ordering(488) 00:08:38.777 fused_ordering(489) 00:08:38.777 fused_ordering(490) 00:08:38.777 fused_ordering(491) 00:08:38.777 fused_ordering(492) 00:08:38.777 fused_ordering(493) 00:08:38.777 fused_ordering(494) 00:08:38.777 fused_ordering(495) 00:08:38.777 fused_ordering(496) 00:08:38.777 fused_ordering(497) 00:08:38.777 fused_ordering(498) 00:08:38.777 fused_ordering(499) 00:08:38.777 fused_ordering(500) 00:08:38.777 fused_ordering(501) 00:08:38.777 fused_ordering(502) 00:08:38.777 fused_ordering(503) 00:08:38.777 fused_ordering(504) 00:08:38.777 fused_ordering(505) 00:08:38.777 fused_ordering(506) 00:08:38.777 fused_ordering(507) 00:08:38.777 fused_ordering(508) 00:08:38.777 fused_ordering(509) 00:08:38.777 fused_ordering(510) 00:08:38.777 fused_ordering(511) 00:08:38.777 fused_ordering(512) 00:08:38.778 fused_ordering(513) 00:08:38.778 fused_ordering(514) 00:08:38.778 fused_ordering(515) 00:08:38.778 fused_ordering(516) 00:08:38.778 fused_ordering(517) 00:08:38.778 fused_ordering(518) 00:08:38.778 fused_ordering(519) 00:08:38.778 fused_ordering(520) 00:08:38.778 fused_ordering(521) 00:08:38.778 fused_ordering(522) 00:08:38.778 fused_ordering(523) 00:08:38.778 fused_ordering(524) 00:08:38.778 fused_ordering(525) 00:08:38.778 fused_ordering(526) 00:08:38.778 
fused_ordering(527) 00:08:38.778 fused_ordering(528) 00:08:38.778 fused_ordering(529) 00:08:38.778 fused_ordering(530) 00:08:38.778 fused_ordering(531) 00:08:38.778 fused_ordering(532) 00:08:38.778 fused_ordering(533) 00:08:38.778 fused_ordering(534) 00:08:38.778 fused_ordering(535) 00:08:38.778 fused_ordering(536) 00:08:38.778 fused_ordering(537) 00:08:38.778 fused_ordering(538) 00:08:38.778 fused_ordering(539) 00:08:38.778 fused_ordering(540) 00:08:38.778 fused_ordering(541) 00:08:38.778 fused_ordering(542) 00:08:38.778 fused_ordering(543) 00:08:38.778 fused_ordering(544) 00:08:38.778 fused_ordering(545) 00:08:38.778 fused_ordering(546) 00:08:38.778 fused_ordering(547) 00:08:38.778 fused_ordering(548) 00:08:38.778 fused_ordering(549) 00:08:38.778 fused_ordering(550) 00:08:38.778 fused_ordering(551) 00:08:38.778 fused_ordering(552) 00:08:38.778 fused_ordering(553) 00:08:38.778 fused_ordering(554) 00:08:38.778 fused_ordering(555) 00:08:38.778 fused_ordering(556) 00:08:38.778 fused_ordering(557) 00:08:38.778 fused_ordering(558) 00:08:38.778 fused_ordering(559) 00:08:38.778 fused_ordering(560) 00:08:38.778 fused_ordering(561) 00:08:38.778 fused_ordering(562) 00:08:38.778 fused_ordering(563) 00:08:38.778 fused_ordering(564) 00:08:38.778 fused_ordering(565) 00:08:38.778 fused_ordering(566) 00:08:38.778 fused_ordering(567) 00:08:38.778 fused_ordering(568) 00:08:38.778 fused_ordering(569) 00:08:38.778 fused_ordering(570) 00:08:38.778 fused_ordering(571) 00:08:38.778 fused_ordering(572) 00:08:38.778 fused_ordering(573) 00:08:38.778 fused_ordering(574) 00:08:38.778 fused_ordering(575) 00:08:38.778 fused_ordering(576) 00:08:38.778 fused_ordering(577) 00:08:38.778 fused_ordering(578) 00:08:38.778 fused_ordering(579) 00:08:38.778 fused_ordering(580) 00:08:38.778 fused_ordering(581) 00:08:38.778 fused_ordering(582) 00:08:38.778 fused_ordering(583) 00:08:38.778 fused_ordering(584) 00:08:38.778 fused_ordering(585) 00:08:38.778 fused_ordering(586) 00:08:38.778 fused_ordering(587) 00:08:38.778 fused_ordering(588) 00:08:38.778 fused_ordering(589) 00:08:38.778 fused_ordering(590) 00:08:38.778 fused_ordering(591) 00:08:38.778 fused_ordering(592) 00:08:38.778 fused_ordering(593) 00:08:38.778 fused_ordering(594) 00:08:38.778 fused_ordering(595) 00:08:38.778 fused_ordering(596) 00:08:38.778 fused_ordering(597) 00:08:38.778 fused_ordering(598) 00:08:38.778 fused_ordering(599) 00:08:38.778 fused_ordering(600) 00:08:38.778 fused_ordering(601) 00:08:38.778 fused_ordering(602) 00:08:38.778 fused_ordering(603) 00:08:38.778 fused_ordering(604) 00:08:38.778 fused_ordering(605) 00:08:38.778 fused_ordering(606) 00:08:38.778 fused_ordering(607) 00:08:38.778 fused_ordering(608) 00:08:38.778 fused_ordering(609) 00:08:38.778 fused_ordering(610) 00:08:38.778 fused_ordering(611) 00:08:38.778 fused_ordering(612) 00:08:38.778 fused_ordering(613) 00:08:38.778 fused_ordering(614) 00:08:38.778 fused_ordering(615) 00:08:39.344 fused_ordering(616) 00:08:39.344 fused_ordering(617) 00:08:39.344 fused_ordering(618) 00:08:39.344 fused_ordering(619) 00:08:39.344 fused_ordering(620) 00:08:39.344 fused_ordering(621) 00:08:39.344 fused_ordering(622) 00:08:39.344 fused_ordering(623) 00:08:39.344 fused_ordering(624) 00:08:39.344 fused_ordering(625) 00:08:39.344 fused_ordering(626) 00:08:39.344 fused_ordering(627) 00:08:39.344 fused_ordering(628) 00:08:39.344 fused_ordering(629) 00:08:39.344 fused_ordering(630) 00:08:39.344 fused_ordering(631) 00:08:39.344 fused_ordering(632) 00:08:39.344 fused_ordering(633) 00:08:39.344 fused_ordering(634) 
00:08:39.344 fused_ordering(635) 00:08:39.344 fused_ordering(636) 00:08:39.344 fused_ordering(637) 00:08:39.344 fused_ordering(638) 00:08:39.344 fused_ordering(639) 00:08:39.344 fused_ordering(640) 00:08:39.344 fused_ordering(641) 00:08:39.344 fused_ordering(642) 00:08:39.344 fused_ordering(643) 00:08:39.344 fused_ordering(644) 00:08:39.344 fused_ordering(645) 00:08:39.344 fused_ordering(646) 00:08:39.344 fused_ordering(647) 00:08:39.344 fused_ordering(648) 00:08:39.344 fused_ordering(649) 00:08:39.344 fused_ordering(650) 00:08:39.344 fused_ordering(651) 00:08:39.344 fused_ordering(652) 00:08:39.344 fused_ordering(653) 00:08:39.344 fused_ordering(654) 00:08:39.344 fused_ordering(655) 00:08:39.344 fused_ordering(656) 00:08:39.344 fused_ordering(657) 00:08:39.344 fused_ordering(658) 00:08:39.344 fused_ordering(659) 00:08:39.344 fused_ordering(660) 00:08:39.344 fused_ordering(661) 00:08:39.344 fused_ordering(662) 00:08:39.344 fused_ordering(663) 00:08:39.344 fused_ordering(664) 00:08:39.344 fused_ordering(665) 00:08:39.344 fused_ordering(666) 00:08:39.344 fused_ordering(667) 00:08:39.344 fused_ordering(668) 00:08:39.344 fused_ordering(669) 00:08:39.344 fused_ordering(670) 00:08:39.344 fused_ordering(671) 00:08:39.344 fused_ordering(672) 00:08:39.344 fused_ordering(673) 00:08:39.344 fused_ordering(674) 00:08:39.344 fused_ordering(675) 00:08:39.344 fused_ordering(676) 00:08:39.344 fused_ordering(677) 00:08:39.344 fused_ordering(678) 00:08:39.344 fused_ordering(679) 00:08:39.344 fused_ordering(680) 00:08:39.344 fused_ordering(681) 00:08:39.344 fused_ordering(682) 00:08:39.344 fused_ordering(683) 00:08:39.344 fused_ordering(684) 00:08:39.344 fused_ordering(685) 00:08:39.344 fused_ordering(686) 00:08:39.344 fused_ordering(687) 00:08:39.344 fused_ordering(688) 00:08:39.344 fused_ordering(689) 00:08:39.344 fused_ordering(690) 00:08:39.344 fused_ordering(691) 00:08:39.344 fused_ordering(692) 00:08:39.344 fused_ordering(693) 00:08:39.344 fused_ordering(694) 00:08:39.344 fused_ordering(695) 00:08:39.344 fused_ordering(696) 00:08:39.344 fused_ordering(697) 00:08:39.344 fused_ordering(698) 00:08:39.344 fused_ordering(699) 00:08:39.344 fused_ordering(700) 00:08:39.344 fused_ordering(701) 00:08:39.344 fused_ordering(702) 00:08:39.344 fused_ordering(703) 00:08:39.344 fused_ordering(704) 00:08:39.344 fused_ordering(705) 00:08:39.344 fused_ordering(706) 00:08:39.344 fused_ordering(707) 00:08:39.344 fused_ordering(708) 00:08:39.344 fused_ordering(709) 00:08:39.344 fused_ordering(710) 00:08:39.344 fused_ordering(711) 00:08:39.344 fused_ordering(712) 00:08:39.344 fused_ordering(713) 00:08:39.344 fused_ordering(714) 00:08:39.344 fused_ordering(715) 00:08:39.344 fused_ordering(716) 00:08:39.344 fused_ordering(717) 00:08:39.344 fused_ordering(718) 00:08:39.344 fused_ordering(719) 00:08:39.344 fused_ordering(720) 00:08:39.344 fused_ordering(721) 00:08:39.344 fused_ordering(722) 00:08:39.344 fused_ordering(723) 00:08:39.344 fused_ordering(724) 00:08:39.344 fused_ordering(725) 00:08:39.344 fused_ordering(726) 00:08:39.344 fused_ordering(727) 00:08:39.344 fused_ordering(728) 00:08:39.344 fused_ordering(729) 00:08:39.344 fused_ordering(730) 00:08:39.344 fused_ordering(731) 00:08:39.344 fused_ordering(732) 00:08:39.344 fused_ordering(733) 00:08:39.344 fused_ordering(734) 00:08:39.344 fused_ordering(735) 00:08:39.344 fused_ordering(736) 00:08:39.344 fused_ordering(737) 00:08:39.344 fused_ordering(738) 00:08:39.344 fused_ordering(739) 00:08:39.344 fused_ordering(740) 00:08:39.344 fused_ordering(741) 00:08:39.344 
fused_ordering(742) 00:08:39.344 fused_ordering(743) 00:08:39.344 fused_ordering(744) 00:08:39.344 fused_ordering(745) 00:08:39.344 fused_ordering(746) 00:08:39.344 fused_ordering(747) 00:08:39.344 fused_ordering(748) 00:08:39.344 fused_ordering(749) 00:08:39.344 fused_ordering(750) 00:08:39.344 fused_ordering(751) 00:08:39.344 fused_ordering(752) 00:08:39.344 fused_ordering(753) 00:08:39.344 fused_ordering(754) 00:08:39.344 fused_ordering(755) 00:08:39.344 fused_ordering(756) 00:08:39.344 fused_ordering(757) 00:08:39.344 fused_ordering(758) 00:08:39.344 fused_ordering(759) 00:08:39.344 fused_ordering(760) 00:08:39.344 fused_ordering(761) 00:08:39.344 fused_ordering(762) 00:08:39.344 fused_ordering(763) 00:08:39.344 fused_ordering(764) 00:08:39.344 fused_ordering(765) 00:08:39.344 fused_ordering(766) 00:08:39.344 fused_ordering(767) 00:08:39.344 fused_ordering(768) 00:08:39.344 fused_ordering(769) 00:08:39.344 fused_ordering(770) 00:08:39.344 fused_ordering(771) 00:08:39.344 fused_ordering(772) 00:08:39.344 fused_ordering(773) 00:08:39.344 fused_ordering(774) 00:08:39.344 fused_ordering(775) 00:08:39.345 fused_ordering(776) 00:08:39.345 fused_ordering(777) 00:08:39.345 fused_ordering(778) 00:08:39.345 fused_ordering(779) 00:08:39.345 fused_ordering(780) 00:08:39.345 fused_ordering(781) 00:08:39.345 fused_ordering(782) 00:08:39.345 fused_ordering(783) 00:08:39.345 fused_ordering(784) 00:08:39.345 fused_ordering(785) 00:08:39.345 fused_ordering(786) 00:08:39.345 fused_ordering(787) 00:08:39.345 fused_ordering(788) 00:08:39.345 fused_ordering(789) 00:08:39.345 fused_ordering(790) 00:08:39.345 fused_ordering(791) 00:08:39.345 fused_ordering(792) 00:08:39.345 fused_ordering(793) 00:08:39.345 fused_ordering(794) 00:08:39.345 fused_ordering(795) 00:08:39.345 fused_ordering(796) 00:08:39.345 fused_ordering(797) 00:08:39.345 fused_ordering(798) 00:08:39.345 fused_ordering(799) 00:08:39.345 fused_ordering(800) 00:08:39.345 fused_ordering(801) 00:08:39.345 fused_ordering(802) 00:08:39.345 fused_ordering(803) 00:08:39.345 fused_ordering(804) 00:08:39.345 fused_ordering(805) 00:08:39.345 fused_ordering(806) 00:08:39.345 fused_ordering(807) 00:08:39.345 fused_ordering(808) 00:08:39.345 fused_ordering(809) 00:08:39.345 fused_ordering(810) 00:08:39.345 fused_ordering(811) 00:08:39.345 fused_ordering(812) 00:08:39.345 fused_ordering(813) 00:08:39.345 fused_ordering(814) 00:08:39.345 fused_ordering(815) 00:08:39.345 fused_ordering(816) 00:08:39.345 fused_ordering(817) 00:08:39.345 fused_ordering(818) 00:08:39.345 fused_ordering(819) 00:08:39.345 fused_ordering(820) 00:08:40.278 fused_ordering(821) 00:08:40.278 fused_ordering(822) 00:08:40.278 fused_ordering(823) 00:08:40.278 fused_ordering(824) 00:08:40.278 fused_ordering(825) 00:08:40.278 fused_ordering(826) 00:08:40.278 fused_ordering(827) 00:08:40.278 fused_ordering(828) 00:08:40.278 fused_ordering(829) 00:08:40.278 fused_ordering(830) 00:08:40.278 fused_ordering(831) 00:08:40.278 fused_ordering(832) 00:08:40.278 fused_ordering(833) 00:08:40.278 fused_ordering(834) 00:08:40.278 fused_ordering(835) 00:08:40.278 fused_ordering(836) 00:08:40.278 fused_ordering(837) 00:08:40.278 fused_ordering(838) 00:08:40.278 fused_ordering(839) 00:08:40.278 fused_ordering(840) 00:08:40.278 fused_ordering(841) 00:08:40.278 fused_ordering(842) 00:08:40.278 fused_ordering(843) 00:08:40.278 fused_ordering(844) 00:08:40.278 fused_ordering(845) 00:08:40.278 fused_ordering(846) 00:08:40.278 fused_ordering(847) 00:08:40.278 fused_ordering(848) 00:08:40.278 fused_ordering(849) 
00:08:40.278 fused_ordering(850) 00:08:40.278 fused_ordering(851) 00:08:40.278 fused_ordering(852) 00:08:40.278 fused_ordering(853) 00:08:40.278 fused_ordering(854) 00:08:40.278 fused_ordering(855) 00:08:40.278 fused_ordering(856) 00:08:40.278 fused_ordering(857) 00:08:40.278 fused_ordering(858) 00:08:40.278 fused_ordering(859) 00:08:40.278 fused_ordering(860) 00:08:40.278 fused_ordering(861) 00:08:40.278 fused_ordering(862) 00:08:40.278 fused_ordering(863) 00:08:40.278 fused_ordering(864) 00:08:40.278 fused_ordering(865) 00:08:40.278 fused_ordering(866) 00:08:40.278 fused_ordering(867) 00:08:40.278 fused_ordering(868) 00:08:40.278 fused_ordering(869) 00:08:40.278 fused_ordering(870) 00:08:40.278 fused_ordering(871) 00:08:40.278 fused_ordering(872) 00:08:40.278 fused_ordering(873) 00:08:40.278 fused_ordering(874) 00:08:40.278 fused_ordering(875) 00:08:40.278 fused_ordering(876) 00:08:40.278 fused_ordering(877) 00:08:40.278 fused_ordering(878) 00:08:40.278 fused_ordering(879) 00:08:40.278 fused_ordering(880) 00:08:40.278 fused_ordering(881) 00:08:40.278 fused_ordering(882) 00:08:40.278 fused_ordering(883) 00:08:40.278 fused_ordering(884) 00:08:40.278 fused_ordering(885) 00:08:40.278 fused_ordering(886) 00:08:40.278 fused_ordering(887) 00:08:40.278 fused_ordering(888) 00:08:40.278 fused_ordering(889) 00:08:40.278 fused_ordering(890) 00:08:40.278 fused_ordering(891) 00:08:40.278 fused_ordering(892) 00:08:40.278 fused_ordering(893) 00:08:40.278 fused_ordering(894) 00:08:40.278 fused_ordering(895) 00:08:40.278 fused_ordering(896) 00:08:40.278 fused_ordering(897) 00:08:40.278 fused_ordering(898) 00:08:40.278 fused_ordering(899) 00:08:40.278 fused_ordering(900) 00:08:40.278 fused_ordering(901) 00:08:40.278 fused_ordering(902) 00:08:40.278 fused_ordering(903) 00:08:40.278 fused_ordering(904) 00:08:40.278 fused_ordering(905) 00:08:40.278 fused_ordering(906) 00:08:40.278 fused_ordering(907) 00:08:40.278 fused_ordering(908) 00:08:40.278 fused_ordering(909) 00:08:40.278 fused_ordering(910) 00:08:40.278 fused_ordering(911) 00:08:40.278 fused_ordering(912) 00:08:40.278 fused_ordering(913) 00:08:40.278 fused_ordering(914) 00:08:40.278 fused_ordering(915) 00:08:40.278 fused_ordering(916) 00:08:40.278 fused_ordering(917) 00:08:40.278 fused_ordering(918) 00:08:40.278 fused_ordering(919) 00:08:40.278 fused_ordering(920) 00:08:40.278 fused_ordering(921) 00:08:40.278 fused_ordering(922) 00:08:40.278 fused_ordering(923) 00:08:40.278 fused_ordering(924) 00:08:40.278 fused_ordering(925) 00:08:40.278 fused_ordering(926) 00:08:40.278 fused_ordering(927) 00:08:40.278 fused_ordering(928) 00:08:40.278 fused_ordering(929) 00:08:40.278 fused_ordering(930) 00:08:40.278 fused_ordering(931) 00:08:40.278 fused_ordering(932) 00:08:40.278 fused_ordering(933) 00:08:40.278 fused_ordering(934) 00:08:40.278 fused_ordering(935) 00:08:40.278 fused_ordering(936) 00:08:40.278 fused_ordering(937) 00:08:40.278 fused_ordering(938) 00:08:40.278 fused_ordering(939) 00:08:40.278 fused_ordering(940) 00:08:40.278 fused_ordering(941) 00:08:40.278 fused_ordering(942) 00:08:40.278 fused_ordering(943) 00:08:40.278 fused_ordering(944) 00:08:40.278 fused_ordering(945) 00:08:40.278 fused_ordering(946) 00:08:40.278 fused_ordering(947) 00:08:40.278 fused_ordering(948) 00:08:40.278 fused_ordering(949) 00:08:40.278 fused_ordering(950) 00:08:40.278 fused_ordering(951) 00:08:40.278 fused_ordering(952) 00:08:40.278 fused_ordering(953) 00:08:40.278 fused_ordering(954) 00:08:40.278 fused_ordering(955) 00:08:40.278 fused_ordering(956) 00:08:40.278 
fused_ordering(957) 00:08:40.278 fused_ordering(958) 00:08:40.278 fused_ordering(959) 00:08:40.278 fused_ordering(960) 00:08:40.278 fused_ordering(961) 00:08:40.278 fused_ordering(962) 00:08:40.278 fused_ordering(963) 00:08:40.278 fused_ordering(964) 00:08:40.278 fused_ordering(965) 00:08:40.278 fused_ordering(966) 00:08:40.278 fused_ordering(967) 00:08:40.278 fused_ordering(968) 00:08:40.278 fused_ordering(969) 00:08:40.278 fused_ordering(970) 00:08:40.278 fused_ordering(971) 00:08:40.278 fused_ordering(972) 00:08:40.278 fused_ordering(973) 00:08:40.278 fused_ordering(974) 00:08:40.278 fused_ordering(975) 00:08:40.278 fused_ordering(976) 00:08:40.278 fused_ordering(977) 00:08:40.278 fused_ordering(978) 00:08:40.278 fused_ordering(979) 00:08:40.278 fused_ordering(980) 00:08:40.278 fused_ordering(981) 00:08:40.278 fused_ordering(982) 00:08:40.278 fused_ordering(983) 00:08:40.278 fused_ordering(984) 00:08:40.278 fused_ordering(985) 00:08:40.278 fused_ordering(986) 00:08:40.278 fused_ordering(987) 00:08:40.278 fused_ordering(988) 00:08:40.278 fused_ordering(989) 00:08:40.278 fused_ordering(990) 00:08:40.278 fused_ordering(991) 00:08:40.278 fused_ordering(992) 00:08:40.278 fused_ordering(993) 00:08:40.278 fused_ordering(994) 00:08:40.278 fused_ordering(995) 00:08:40.278 fused_ordering(996) 00:08:40.278 fused_ordering(997) 00:08:40.279 fused_ordering(998) 00:08:40.279 fused_ordering(999) 00:08:40.279 fused_ordering(1000) 00:08:40.279 fused_ordering(1001) 00:08:40.279 fused_ordering(1002) 00:08:40.279 fused_ordering(1003) 00:08:40.279 fused_ordering(1004) 00:08:40.279 fused_ordering(1005) 00:08:40.279 fused_ordering(1006) 00:08:40.279 fused_ordering(1007) 00:08:40.279 fused_ordering(1008) 00:08:40.279 fused_ordering(1009) 00:08:40.279 fused_ordering(1010) 00:08:40.279 fused_ordering(1011) 00:08:40.279 fused_ordering(1012) 00:08:40.279 fused_ordering(1013) 00:08:40.279 fused_ordering(1014) 00:08:40.279 fused_ordering(1015) 00:08:40.279 fused_ordering(1016) 00:08:40.279 fused_ordering(1017) 00:08:40.279 fused_ordering(1018) 00:08:40.279 fused_ordering(1019) 00:08:40.279 fused_ordering(1020) 00:08:40.279 fused_ordering(1021) 00:08:40.279 fused_ordering(1022) 00:08:40.279 fused_ordering(1023) 00:08:40.279 21:23:05 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:40.279 21:23:05 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:40.279 21:23:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:40.279 21:23:05 -- nvmf/common.sh@117 -- # sync 00:08:40.279 21:23:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.279 21:23:05 -- nvmf/common.sh@120 -- # set +e 00:08:40.279 21:23:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.279 21:23:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.279 rmmod nvme_tcp 00:08:40.279 rmmod nvme_fabrics 00:08:40.279 rmmod nvme_keyring 00:08:40.279 21:23:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.279 21:23:05 -- nvmf/common.sh@124 -- # set -e 00:08:40.279 21:23:05 -- nvmf/common.sh@125 -- # return 0 00:08:40.279 21:23:05 -- nvmf/common.sh@478 -- # '[' -n 2538845 ']' 00:08:40.279 21:23:05 -- nvmf/common.sh@479 -- # killprocess 2538845 00:08:40.279 21:23:05 -- common/autotest_common.sh@936 -- # '[' -z 2538845 ']' 00:08:40.279 21:23:05 -- common/autotest_common.sh@940 -- # kill -0 2538845 00:08:40.279 21:23:05 -- common/autotest_common.sh@941 -- # uname 00:08:40.279 21:23:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:40.279 21:23:05 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 2538845 00:08:40.279 21:23:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:40.279 21:23:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:40.279 21:23:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2538845' 00:08:40.279 killing process with pid 2538845 00:08:40.279 21:23:05 -- common/autotest_common.sh@955 -- # kill 2538845 00:08:40.279 21:23:05 -- common/autotest_common.sh@960 -- # wait 2538845 00:08:40.538 21:23:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:40.538 21:23:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:40.538 21:23:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:40.538 21:23:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.538 21:23:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.538 21:23:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.538 21:23:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.538 21:23:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.075 21:23:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.075 00:08:43.075 real 0m9.344s 00:08:43.075 user 0m7.220s 00:08:43.075 sys 0m4.175s 00:08:43.075 21:23:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:43.075 21:23:08 -- common/autotest_common.sh@10 -- # set +x 00:08:43.075 ************************************ 00:08:43.075 END TEST nvmf_fused_ordering 00:08:43.075 ************************************ 00:08:43.075 21:23:08 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:43.075 21:23:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:43.075 21:23:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.075 21:23:08 -- common/autotest_common.sh@10 -- # set +x 00:08:43.075 ************************************ 00:08:43.075 START TEST nvmf_delete_subsystem 00:08:43.075 ************************************ 00:08:43.075 21:23:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:43.075 * Looking for test storage... 
00:08:43.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.075 21:23:08 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.075 21:23:08 -- nvmf/common.sh@7 -- # uname -s 00:08:43.075 21:23:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.075 21:23:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.075 21:23:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.075 21:23:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.075 21:23:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.075 21:23:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.075 21:23:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.075 21:23:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.075 21:23:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.075 21:23:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.075 21:23:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.075 21:23:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.075 21:23:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.075 21:23:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.075 21:23:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.075 21:23:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.075 21:23:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.075 21:23:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.075 21:23:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.075 21:23:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.075 21:23:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.075 21:23:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.075 21:23:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.075 21:23:08 -- paths/export.sh@5 -- # export PATH 00:08:43.075 21:23:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.075 21:23:08 -- nvmf/common.sh@47 -- # : 0 00:08:43.075 21:23:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.075 21:23:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.075 21:23:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.075 21:23:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.075 21:23:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.075 21:23:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.075 21:23:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.075 21:23:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.075 21:23:08 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:43.075 21:23:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:43.075 21:23:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.075 21:23:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:43.075 21:23:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:43.075 21:23:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:43.075 21:23:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.076 21:23:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.076 21:23:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.076 21:23:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:43.076 21:23:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:43.076 21:23:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.076 21:23:08 -- common/autotest_common.sh@10 -- # set +x 00:08:44.977 21:23:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:44.977 21:23:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:44.977 21:23:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:44.977 21:23:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:44.977 21:23:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:44.977 21:23:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:44.977 21:23:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:44.977 21:23:10 -- nvmf/common.sh@295 -- # net_devs=() 00:08:44.977 21:23:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:44.977 21:23:10 -- nvmf/common.sh@296 -- # e810=() 00:08:44.977 21:23:10 -- nvmf/common.sh@296 -- # local -ga e810 00:08:44.977 21:23:10 -- nvmf/common.sh@297 -- # x722=() 
00:08:44.977 21:23:10 -- nvmf/common.sh@297 -- # local -ga x722 00:08:44.977 21:23:10 -- nvmf/common.sh@298 -- # mlx=() 00:08:44.977 21:23:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:44.977 21:23:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.977 21:23:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.977 21:23:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.977 21:23:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.977 21:23:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.977 21:23:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.977 21:23:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.977 21:23:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.977 21:23:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.977 21:23:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.977 21:23:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.977 21:23:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:44.977 21:23:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:44.977 21:23:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:44.977 21:23:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:44.977 21:23:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:44.977 21:23:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:44.978 21:23:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.978 21:23:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:44.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:44.978 21:23:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.978 21:23:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:44.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:44.978 21:23:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:44.978 21:23:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.978 21:23:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.978 21:23:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:44.978 21:23:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.978 21:23:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:44.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:44.978 21:23:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
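The pci_net_devs globbing above relies on sysfs: a network-capable PCI function lists its kernel interfaces under /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of the same lookup, using the two E810 functions found in this run:

# Sketch: map a PCI address to its netdev name(s) the way the harness does.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue                  # skip functions with no netdev bound
        echo "Found net device under $pci: ${dev##*/}"
    done
done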
00:08:44.978 21:23:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.978 21:23:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.978 21:23:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:44.978 21:23:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.978 21:23:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:44.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:44.978 21:23:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.978 21:23:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:44.978 21:23:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:44.978 21:23:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:44.978 21:23:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.978 21:23:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.978 21:23:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.978 21:23:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:44.978 21:23:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.978 21:23:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.978 21:23:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:44.978 21:23:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.978 21:23:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.978 21:23:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:44.978 21:23:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:44.978 21:23:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.978 21:23:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.978 21:23:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.978 21:23:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.978 21:23:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:44.978 21:23:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.978 21:23:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.978 21:23:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.978 21:23:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:44.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:08:44.978 00:08:44.978 --- 10.0.0.2 ping statistics --- 00:08:44.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.978 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:08:44.978 21:23:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:08:44.978 00:08:44.978 --- 10.0.0.1 ping statistics --- 00:08:44.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.978 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:08:44.978 21:23:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.978 21:23:10 -- nvmf/common.sh@411 -- # return 0 00:08:44.978 21:23:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:44.978 21:23:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.978 21:23:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:44.978 21:23:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.978 21:23:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:44.978 21:23:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:44.978 21:23:10 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:44.978 21:23:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:44.978 21:23:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:44.978 21:23:10 -- common/autotest_common.sh@10 -- # set +x 00:08:44.978 21:23:10 -- nvmf/common.sh@470 -- # nvmfpid=2541339 00:08:44.978 21:23:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:44.978 21:23:10 -- nvmf/common.sh@471 -- # waitforlisten 2541339 00:08:44.978 21:23:10 -- common/autotest_common.sh@817 -- # '[' -z 2541339 ']' 00:08:44.978 21:23:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.978 21:23:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:44.978 21:23:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.978 21:23:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:44.978 21:23:10 -- common/autotest_common.sh@10 -- # set +x 00:08:45.237 [2024-04-24 21:23:10.666865] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:45.237 [2024-04-24 21:23:10.666954] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.237 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.237 [2024-04-24 21:23:10.743674] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:45.237 [2024-04-24 21:23:10.866483] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.237 [2024-04-24 21:23:10.866548] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.237 [2024-04-24 21:23:10.866565] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.237 [2024-04-24 21:23:10.866579] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.237 [2024-04-24 21:23:10.866591] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:45.237 [2024-04-24 21:23:10.866677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.237 [2024-04-24 21:23:10.866683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.495 21:23:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:45.495 21:23:10 -- common/autotest_common.sh@850 -- # return 0 00:08:45.495 21:23:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:45.495 21:23:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:45.495 21:23:10 -- common/autotest_common.sh@10 -- # set +x 00:08:45.495 21:23:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.495 21:23:11 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.495 21:23:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.495 21:23:11 -- common/autotest_common.sh@10 -- # set +x 00:08:45.495 [2024-04-24 21:23:11.018300] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.495 21:23:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.495 21:23:11 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:45.495 21:23:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.495 21:23:11 -- common/autotest_common.sh@10 -- # set +x 00:08:45.495 21:23:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.495 21:23:11 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.495 21:23:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.495 21:23:11 -- common/autotest_common.sh@10 -- # set +x 00:08:45.495 [2024-04-24 21:23:11.034580] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.495 21:23:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.495 21:23:11 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:45.495 21:23:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.495 21:23:11 -- common/autotest_common.sh@10 -- # set +x 00:08:45.495 NULL1 00:08:45.495 21:23:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.495 21:23:11 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:45.495 21:23:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.495 21:23:11 -- common/autotest_common.sh@10 -- # set +x 00:08:45.495 Delay0 00:08:45.495 21:23:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.495 21:23:11 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.495 21:23:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.495 21:23:11 -- common/autotest_common.sh@10 -- # set +x 00:08:45.495 21:23:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.495 21:23:11 -- target/delete_subsystem.sh@28 -- # perf_pid=2541371 00:08:45.495 21:23:11 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:45.495 21:23:11 -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:45.495 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.495 [2024-04-24 21:23:11.119328] 
subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:47.391 21:23:13 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.391 21:23:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.391 21:23:13 -- common/autotest_common.sh@10 -- # set +x 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 
[2024-04-24 21:23:13.335414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133a880 is same with the state(5) to be set 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, 
sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 starting I/O failed: -6 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Write completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.957 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 starting I/O failed: -6 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 starting I/O failed: -6 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 starting I/O failed: -6 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 starting I/O failed: -6 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 starting I/O failed: -6 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 starting I/O failed: -6 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 [2024-04-24 21:23:13.336581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fed6800c250 is same with the state(5) to be set 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error 
(sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Write completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:47.958 Read completed with error (sct=0, sc=8) 00:08:48.890 [2024-04-24 21:23:14.304471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1359120 is same with the state(5) to be set 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 
00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 [2024-04-24 21:23:14.337677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fed6800bf90 is same with the state(5) to be set 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 [2024-04-24 21:23:14.338003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133ad30 is same with the state(5) to be set 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 
00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 [2024-04-24 21:23:14.338225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133aa10 is same with the state(5) to be set 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Write completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 Read completed with error (sct=0, sc=8) 00:08:48.890 [2024-04-24 21:23:14.338504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fed6800c510 is same with the state(5) to be set 00:08:48.890 [2024-04-24 21:23:14.339684] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1359120 (9): Bad file descriptor 00:08:48.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:48.890 21:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.890 21:23:14 -- target/delete_subsystem.sh@34 -- # delay=0 00:08:48.890 21:23:14 -- target/delete_subsystem.sh@35 -- # kill -0 2541371 00:08:48.890 21:23:14 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:48.890 Initializing NVMe Controllers 00:08:48.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:48.890 Controller IO queue size 128, less than required. 00:08:48.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:48.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:48.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:48.890 Initialization complete. Launching workers. 
00:08:48.890 ======================================================== 00:08:48.890 Latency(us) 00:08:48.890 Device Information : IOPS MiB/s Average min max 00:08:48.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.37 0.09 883114.91 826.66 1015612.86 00:08:48.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.37 0.09 883292.85 519.10 1016026.00 00:08:48.890 ======================================================== 00:08:48.890 Total : 352.75 0.17 883203.88 519.10 1016026.00 00:08:48.890 00:08:49.509 21:23:14 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:49.509 21:23:14 -- target/delete_subsystem.sh@35 -- # kill -0 2541371 00:08:49.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2541371) - No such process 00:08:49.509 21:23:14 -- target/delete_subsystem.sh@45 -- # NOT wait 2541371 00:08:49.509 21:23:14 -- common/autotest_common.sh@638 -- # local es=0 00:08:49.509 21:23:14 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 2541371 00:08:49.509 21:23:14 -- common/autotest_common.sh@626 -- # local arg=wait 00:08:49.509 21:23:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.509 21:23:14 -- common/autotest_common.sh@630 -- # type -t wait 00:08:49.509 21:23:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.509 21:23:14 -- common/autotest_common.sh@641 -- # wait 2541371 00:08:49.509 21:23:14 -- common/autotest_common.sh@641 -- # es=1 00:08:49.509 21:23:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:49.509 21:23:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:49.509 21:23:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:49.509 21:23:14 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.509 21:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:49.509 21:23:14 -- common/autotest_common.sh@10 -- # set +x 00:08:49.509 21:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:49.509 21:23:14 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.509 21:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:49.509 21:23:14 -- common/autotest_common.sh@10 -- # set +x 00:08:49.509 [2024-04-24 21:23:14.864028] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.509 21:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:49.509 21:23:14 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.509 21:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:49.509 21:23:14 -- common/autotest_common.sh@10 -- # set +x 00:08:49.509 21:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:49.509 21:23:14 -- target/delete_subsystem.sh@54 -- # perf_pid=2541884 00:08:49.509 21:23:14 -- target/delete_subsystem.sh@56 -- # delay=0 00:08:49.509 21:23:14 -- target/delete_subsystem.sh@57 -- # kill -0 2541884 00:08:49.509 21:23:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:49.509 21:23:14 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:49.509 EAL: No free 2048 kB hugepages 
reported on node 1 00:08:49.509 [2024-04-24 21:23:14.931422] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:49.768 21:23:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:49.768 21:23:15 -- target/delete_subsystem.sh@57 -- # kill -0 2541884 00:08:49.768 21:23:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.335 21:23:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.335 21:23:15 -- target/delete_subsystem.sh@57 -- # kill -0 2541884 00:08:50.335 21:23:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.903 21:23:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.903 21:23:16 -- target/delete_subsystem.sh@57 -- # kill -0 2541884 00:08:50.903 21:23:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.470 21:23:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:51.470 21:23:16 -- target/delete_subsystem.sh@57 -- # kill -0 2541884 00:08:51.470 21:23:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.728 21:23:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:51.728 21:23:17 -- target/delete_subsystem.sh@57 -- # kill -0 2541884 00:08:51.728 21:23:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:52.295 21:23:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:52.295 21:23:17 -- target/delete_subsystem.sh@57 -- # kill -0 2541884 00:08:52.295 21:23:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:52.554 Initializing NVMe Controllers 00:08:52.554 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:52.554 Controller IO queue size 128, less than required. 00:08:52.554 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:52.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:52.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:52.554 Initialization complete. Launching workers. 
00:08:52.554 ======================================================== 00:08:52.554 Latency(us) 00:08:52.554 Device Information : IOPS MiB/s Average min max 00:08:52.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004130.28 1000254.37 1014066.89 00:08:52.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005491.55 1000309.20 1042926.02 00:08:52.554 ======================================================== 00:08:52.554 Total : 256.00 0.12 1004810.91 1000254.37 1042926.02 00:08:52.554 00:08:52.812 21:23:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:52.812 21:23:18 -- target/delete_subsystem.sh@57 -- # kill -0 2541884 00:08:52.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2541884) - No such process 00:08:52.812 21:23:18 -- target/delete_subsystem.sh@67 -- # wait 2541884 00:08:52.812 21:23:18 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:52.812 21:23:18 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:52.812 21:23:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:52.812 21:23:18 -- nvmf/common.sh@117 -- # sync 00:08:52.812 21:23:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:52.812 21:23:18 -- nvmf/common.sh@120 -- # set +e 00:08:52.812 21:23:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:52.812 21:23:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:52.812 rmmod nvme_tcp 00:08:52.812 rmmod nvme_fabrics 00:08:52.812 rmmod nvme_keyring 00:08:52.812 21:23:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:52.812 21:23:18 -- nvmf/common.sh@124 -- # set -e 00:08:52.812 21:23:18 -- nvmf/common.sh@125 -- # return 0 00:08:52.812 21:23:18 -- nvmf/common.sh@478 -- # '[' -n 2541339 ']' 00:08:52.812 21:23:18 -- nvmf/common.sh@479 -- # killprocess 2541339 00:08:52.812 21:23:18 -- common/autotest_common.sh@936 -- # '[' -z 2541339 ']' 00:08:52.812 21:23:18 -- common/autotest_common.sh@940 -- # kill -0 2541339 00:08:52.812 21:23:18 -- common/autotest_common.sh@941 -- # uname 00:08:52.812 21:23:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:52.812 21:23:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2541339 00:08:52.812 21:23:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:52.812 21:23:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:52.812 21:23:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2541339' 00:08:52.812 killing process with pid 2541339 00:08:52.812 21:23:18 -- common/autotest_common.sh@955 -- # kill 2541339 00:08:52.812 21:23:18 -- common/autotest_common.sh@960 -- # wait 2541339 00:08:53.380 21:23:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:53.380 21:23:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:53.380 21:23:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:53.380 21:23:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.380 21:23:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.380 21:23:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.380 21:23:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.380 21:23:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.287 21:23:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:55.287 00:08:55.287 real 0m12.551s 00:08:55.287 user 0m28.121s 00:08:55.287 sys 0m3.150s 
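
For reference, the long runs of 'completed with error (sct=0, sc=8)' above are the expected outcome of this test, not noise: Delay0 holds every I/O for about a second, so deleting the subsystem two seconds into a perf run tears down the queue pairs with commands still in flight, and the initiator sees those commands aborted (sct=0/sc=8 appears to be the generic 'command aborted due to SQ deletion' status). Reconstructed from the xtrace records, the flow is approximately the sketch below; rpc stands in for the test's rpc_cmd wrapper, paths are abbreviated, and nvmf_tgt is assumed to already be serving RPCs inside the target namespace:

  rpc() { scripts/rpc.py "$@"; }                        # stand-in for rpc_cmd in the trace
  rpc nvmf_create_transport -t tcp -o -u 8192           # options as captured in NVMF_TRANSPORT_OPTS
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512                   # null bdev, 512 B blocks
  rpc bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s added latency on all I/O (us)
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &         # cores 2-3, matching the per-core results above
  perf_pid=$!
  sleep 2
  rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # yank the subsystem mid-run
  wait $perf_pid || true                                # perf exits reporting the aborted I/O
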
00:08:55.287 21:23:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:55.287 21:23:20 -- common/autotest_common.sh@10 -- # set +x 00:08:55.287 ************************************ 00:08:55.287 END TEST nvmf_delete_subsystem 00:08:55.287 ************************************ 00:08:55.287 21:23:20 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:08:55.287 21:23:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:55.287 21:23:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.287 21:23:20 -- common/autotest_common.sh@10 -- # set +x 00:08:55.287 ************************************ 00:08:55.287 START TEST nvmf_ns_masking 00:08:55.287 ************************************ 00:08:55.287 21:23:20 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:08:55.545 * Looking for test storage... 00:08:55.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.545 21:23:21 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.545 21:23:21 -- nvmf/common.sh@7 -- # uname -s 00:08:55.545 21:23:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.545 21:23:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.545 21:23:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.545 21:23:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.545 21:23:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.545 21:23:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.545 21:23:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.545 21:23:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.545 21:23:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.545 21:23:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.545 21:23:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:55.545 21:23:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:55.545 21:23:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.545 21:23:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.545 21:23:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.545 21:23:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.545 21:23:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.545 21:23:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.545 21:23:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.545 21:23:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.545 21:23:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.545 21:23:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.545 21:23:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.545 21:23:21 -- paths/export.sh@5 -- # export PATH 00:08:55.545 21:23:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.545 21:23:21 -- nvmf/common.sh@47 -- # : 0 00:08:55.545 21:23:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:55.545 21:23:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:55.545 21:23:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.545 21:23:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.545 21:23:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.545 21:23:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:55.545 21:23:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:55.545 21:23:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:55.546 21:23:21 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.546 21:23:21 -- target/ns_masking.sh@11 -- # loops=5 00:08:55.546 21:23:21 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:08:55.546 21:23:21 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:08:55.546 21:23:21 -- target/ns_masking.sh@15 -- # uuidgen 00:08:55.546 21:23:21 -- target/ns_masking.sh@15 -- # HOSTID=b3ee9dac-f613-4de8-be20-591f3aa069f2 00:08:55.546 21:23:21 -- target/ns_masking.sh@44 -- # nvmftestinit 00:08:55.546 21:23:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:55.546 21:23:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.546 21:23:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:55.546 21:23:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:55.546 21:23:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:55.546 21:23:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.546 21:23:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.546 21:23:21 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:08:55.546 21:23:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:55.546 21:23:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:55.546 21:23:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:55.546 21:23:21 -- common/autotest_common.sh@10 -- # set +x 00:08:57.451 21:23:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:57.451 21:23:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.451 21:23:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:57.451 21:23:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.451 21:23:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.451 21:23:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.451 21:23:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.451 21:23:22 -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.451 21:23:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:57.451 21:23:22 -- nvmf/common.sh@296 -- # e810=() 00:08:57.451 21:23:22 -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.451 21:23:22 -- nvmf/common.sh@297 -- # x722=() 00:08:57.451 21:23:22 -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.451 21:23:22 -- nvmf/common.sh@298 -- # mlx=() 00:08:57.451 21:23:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.451 21:23:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.451 21:23:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.451 21:23:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.451 21:23:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.451 21:23:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.451 21:23:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.451 21:23:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.451 21:23:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.451 21:23:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.451 21:23:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.451 21:23:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.451 21:23:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.451 21:23:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.451 21:23:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.451 21:23:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.451 21:23:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:57.451 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:57.451 21:23:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.451 21:23:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:57.451 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:57.451 21:23:22 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.451 21:23:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.451 21:23:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.451 21:23:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:57.451 21:23:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.451 21:23:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:57.451 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:57.451 21:23:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.451 21:23:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.451 21:23:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.451 21:23:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:57.451 21:23:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.451 21:23:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:57.451 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:57.451 21:23:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.451 21:23:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:57.451 21:23:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:57.451 21:23:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:57.451 21:23:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:57.451 21:23:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.451 21:23:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.451 21:23:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.451 21:23:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.451 21:23:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.451 21:23:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.451 21:23:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.451 21:23:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.451 21:23:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.451 21:23:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.451 21:23:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.451 21:23:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.451 21:23:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.451 21:23:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.451 21:23:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.451 21:23:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:57.451 21:23:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.451 21:23:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.451 21:23:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.451 21:23:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:57.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:08:57.452 00:08:57.452 --- 10.0.0.2 ping statistics --- 00:08:57.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.452 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:08:57.452 21:23:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:08:57.452 00:08:57.452 --- 10.0.0.1 ping statistics --- 00:08:57.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.452 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:08:57.452 21:23:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.452 21:23:23 -- nvmf/common.sh@411 -- # return 0 00:08:57.452 21:23:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:57.452 21:23:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.452 21:23:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:57.452 21:23:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:57.452 21:23:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.452 21:23:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:57.452 21:23:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:57.452 21:23:23 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:08:57.452 21:23:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:57.452 21:23:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:57.452 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:08:57.452 21:23:23 -- nvmf/common.sh@470 -- # nvmfpid=2544241 00:08:57.452 21:23:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.452 21:23:23 -- nvmf/common.sh@471 -- # waitforlisten 2544241 00:08:57.452 21:23:23 -- common/autotest_common.sh@817 -- # '[' -z 2544241 ']' 00:08:57.452 21:23:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.452 21:23:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:57.452 21:23:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.452 21:23:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:57.452 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:08:57.711 [2024-04-24 21:23:23.133800] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:57.711 [2024-04-24 21:23:23.133879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.711 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.711 [2024-04-24 21:23:23.205859] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.711 [2024-04-24 21:23:23.326086] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:57.711 [2024-04-24 21:23:23.326151] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.711 [2024-04-24 21:23:23.326166] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.711 [2024-04-24 21:23:23.326180] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.711 [2024-04-24 21:23:23.326192] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.711 [2024-04-24 21:23:23.326264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.711 [2024-04-24 21:23:23.326317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.711 [2024-04-24 21:23:23.326349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.711 [2024-04-24 21:23:23.326352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.026 21:23:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:58.026 21:23:23 -- common/autotest_common.sh@850 -- # return 0 00:08:58.026 21:23:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:58.026 21:23:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:58.026 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:08:58.026 21:23:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.026 21:23:23 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:58.284 [2024-04-24 21:23:23.749458] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.284 21:23:23 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:08:58.284 21:23:23 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:08:58.284 21:23:23 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:08:58.541 Malloc1 00:08:58.541 21:23:24 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:08:58.799 Malloc2 00:08:58.799 21:23:24 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:59.057 21:23:24 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:08:59.315 21:23:24 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.573 [2024-04-24 21:23:25.054065] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.573 21:23:25 -- target/ns_masking.sh@61 -- # connect 00:08:59.573 21:23:25 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b3ee9dac-f613-4de8-be20-591f3aa069f2 -a 10.0.0.2 -s 4420 -i 4 00:08:59.833 21:23:25 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:08:59.833 21:23:25 -- common/autotest_common.sh@1184 -- # local i=0 00:08:59.833 21:23:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:59.833 21:23:25 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
00:08:59.833 21:23:25 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:01.738 21:23:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:01.738 21:23:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:01.738 21:23:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:01.738 21:23:27 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:01.738 21:23:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:01.738 21:23:27 -- common/autotest_common.sh@1194 -- # return 0 00:09:01.738 21:23:27 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:01.738 21:23:27 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:01.738 21:23:27 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:01.738 21:23:27 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:01.738 21:23:27 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:01.738 21:23:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:01.738 21:23:27 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:01.738 [ 0]:0x1 00:09:01.738 21:23:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:01.738 21:23:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:01.738 21:23:27 -- target/ns_masking.sh@40 -- # nguid=856d7671d0ed40ccbd791c2873044561 00:09:01.738 21:23:27 -- target/ns_masking.sh@41 -- # [[ 856d7671d0ed40ccbd791c2873044561 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:01.738 21:23:27 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:01.995 21:23:27 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:01.995 21:23:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:01.995 21:23:27 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:01.995 [ 0]:0x1 00:09:01.995 21:23:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:01.995 21:23:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:02.253 21:23:27 -- target/ns_masking.sh@40 -- # nguid=856d7671d0ed40ccbd791c2873044561 00:09:02.253 21:23:27 -- target/ns_masking.sh@41 -- # [[ 856d7671d0ed40ccbd791c2873044561 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:02.253 21:23:27 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:02.253 21:23:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:02.253 21:23:27 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:02.253 [ 1]:0x2 00:09:02.253 21:23:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:02.253 21:23:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:02.253 21:23:27 -- target/ns_masking.sh@40 -- # nguid=ee7acc9e69654bd68d43b41596868821 00:09:02.253 21:23:27 -- target/ns_masking.sh@41 -- # [[ ee7acc9e69654bd68d43b41596868821 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:02.253 21:23:27 -- target/ns_masking.sh@69 -- # disconnect 00:09:02.253 21:23:27 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:02.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.253 21:23:27 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.510 21:23:28 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:02.768 21:23:28 -- target/ns_masking.sh@77 -- # connect 1 00:09:02.768 21:23:28 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b3ee9dac-f613-4de8-be20-591f3aa069f2 -a 10.0.0.2 -s 4420 -i 4 00:09:03.027 21:23:28 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:03.027 21:23:28 -- common/autotest_common.sh@1184 -- # local i=0 00:09:03.027 21:23:28 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.027 21:23:28 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:09:03.027 21:23:28 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:09:03.027 21:23:28 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:04.979 21:23:30 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:04.979 21:23:30 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:04.979 21:23:30 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.979 21:23:30 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:04.979 21:23:30 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.979 21:23:30 -- common/autotest_common.sh@1194 -- # return 0 00:09:04.979 21:23:30 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:04.979 21:23:30 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:04.979 21:23:30 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:04.979 21:23:30 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:04.979 21:23:30 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:04.979 21:23:30 -- common/autotest_common.sh@638 -- # local es=0 00:09:04.979 21:23:30 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:04.979 21:23:30 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:04.979 21:23:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:04.979 21:23:30 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:04.979 21:23:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:04.979 21:23:30 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:04.980 21:23:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:04.980 21:23:30 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:04.980 21:23:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:04.980 21:23:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:05.238 21:23:30 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:05.238 21:23:30 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:05.238 21:23:30 -- common/autotest_common.sh@641 -- # es=1 00:09:05.238 21:23:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:05.238 21:23:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:05.238 21:23:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:05.238 21:23:30 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:05.238 21:23:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:05.238 21:23:30 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:05.238 [ 0]:0x2 00:09:05.238 21:23:30 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:09:05.238 21:23:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:05.238 21:23:30 -- target/ns_masking.sh@40 -- # nguid=ee7acc9e69654bd68d43b41596868821 00:09:05.238 21:23:30 -- target/ns_masking.sh@41 -- # [[ ee7acc9e69654bd68d43b41596868821 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:05.238 21:23:30 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:05.496 21:23:31 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:05.496 21:23:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:05.496 21:23:31 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:05.496 [ 0]:0x1 00:09:05.496 21:23:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:05.496 21:23:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:05.496 21:23:31 -- target/ns_masking.sh@40 -- # nguid=856d7671d0ed40ccbd791c2873044561 00:09:05.496 21:23:31 -- target/ns_masking.sh@41 -- # [[ 856d7671d0ed40ccbd791c2873044561 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:05.496 21:23:31 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:05.496 21:23:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:05.496 21:23:31 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:05.496 [ 1]:0x2 00:09:05.496 21:23:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:05.496 21:23:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:05.496 21:23:31 -- target/ns_masking.sh@40 -- # nguid=ee7acc9e69654bd68d43b41596868821 00:09:05.496 21:23:31 -- target/ns_masking.sh@41 -- # [[ ee7acc9e69654bd68d43b41596868821 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:05.496 21:23:31 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:05.753 21:23:31 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:05.753 21:23:31 -- common/autotest_common.sh@638 -- # local es=0 00:09:05.753 21:23:31 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:05.753 21:23:31 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:05.753 21:23:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:05.753 21:23:31 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:05.753 21:23:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:05.753 21:23:31 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:05.753 21:23:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:05.753 21:23:31 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:05.753 21:23:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:05.753 21:23:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:06.011 21:23:31 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:06.011 21:23:31 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:06.011 21:23:31 -- common/autotest_common.sh@641 -- # es=1 00:09:06.011 21:23:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:06.011 21:23:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:06.011 21:23:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:06.011 21:23:31 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:06.011 21:23:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:06.011 21:23:31 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:06.011 [ 0]:0x2 00:09:06.011 21:23:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:06.011 21:23:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:06.011 21:23:31 -- target/ns_masking.sh@40 -- # nguid=ee7acc9e69654bd68d43b41596868821 00:09:06.011 21:23:31 -- target/ns_masking.sh@41 -- # [[ ee7acc9e69654bd68d43b41596868821 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:06.011 21:23:31 -- target/ns_masking.sh@91 -- # disconnect 00:09:06.011 21:23:31 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:06.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.011 21:23:31 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:06.269 21:23:31 -- target/ns_masking.sh@95 -- # connect 2 00:09:06.269 21:23:31 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b3ee9dac-f613-4de8-be20-591f3aa069f2 -a 10.0.0.2 -s 4420 -i 4 00:09:06.269 21:23:31 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:06.269 21:23:31 -- common/autotest_common.sh@1184 -- # local i=0 00:09:06.269 21:23:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.269 21:23:31 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:09:06.269 21:23:31 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:09:06.269 21:23:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:08.801 21:23:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:08.801 21:23:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:08.801 21:23:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.801 21:23:33 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:09:08.801 21:23:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.801 21:23:33 -- common/autotest_common.sh@1194 -- # return 0 00:09:08.801 21:23:33 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:08.801 21:23:33 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:08.801 21:23:33 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:08.801 21:23:33 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:08.801 21:23:33 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:08.801 21:23:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:08.801 21:23:33 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:08.801 [ 0]:0x1 00:09:08.801 21:23:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:08.801 21:23:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:08.801 21:23:33 -- target/ns_masking.sh@40 -- # nguid=856d7671d0ed40ccbd791c2873044561 00:09:08.801 21:23:33 -- target/ns_masking.sh@41 -- # [[ 856d7671d0ed40ccbd791c2873044561 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.801 21:23:33 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:08.801 21:23:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:08.801 21:23:33 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:08.801 [ 1]:0x2 
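Every visibility probe in this test goes through the ns_is_visible helper, which treats a namespace as exposed when it both appears in list-ns and reports a non-zero NGUID; a namespace masked away from this host identifies with an all-zero NGUID, and the NOT wrapper inverts the helper for the negative cases. A condensed sketch of the helper and of the masking RPCs exercised here, with $rpc standing in for the rpc.py path as above:

ns_is_visible() {
    local nsid=$1                                      # e.g. 0x1 or 0x2
    nvme list-ns /dev/nvme0 | grep "$nsid"             # prints "[ 0]:0x1" style rows when exposed
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]] # all zeros means masked
}

$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
$rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # expose NSID 1 to host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hide it again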
00:09:08.801 21:23:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:08.801 21:23:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:08.801 21:23:34 -- target/ns_masking.sh@40 -- # nguid=ee7acc9e69654bd68d43b41596868821 00:09:08.801 21:23:34 -- target/ns_masking.sh@41 -- # [[ ee7acc9e69654bd68d43b41596868821 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.801 21:23:34 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:08.801 21:23:34 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:08.801 21:23:34 -- common/autotest_common.sh@638 -- # local es=0 00:09:08.801 21:23:34 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:08.801 21:23:34 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:08.801 21:23:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:08.801 21:23:34 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:08.801 21:23:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:08.801 21:23:34 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:08.801 21:23:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:08.801 21:23:34 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:08.801 21:23:34 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:08.801 21:23:34 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:08.801 21:23:34 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:08.801 21:23:34 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.801 21:23:34 -- common/autotest_common.sh@641 -- # es=1 00:09:08.801 21:23:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:08.801 21:23:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:08.801 21:23:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:08.801 21:23:34 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:08.801 21:23:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:08.801 21:23:34 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:08.801 [ 0]:0x2 00:09:08.801 21:23:34 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:08.801 21:23:34 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:08.801 21:23:34 -- target/ns_masking.sh@40 -- # nguid=ee7acc9e69654bd68d43b41596868821 00:09:08.801 21:23:34 -- target/ns_masking.sh@41 -- # [[ ee7acc9e69654bd68d43b41596868821 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.801 21:23:34 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:08.801 21:23:34 -- common/autotest_common.sh@638 -- # local es=0 00:09:08.801 21:23:34 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:08.801 21:23:34 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.801 21:23:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:08.801 21:23:34 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.801 21:23:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:08.801 21:23:34 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.801 21:23:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:08.801 21:23:34 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.801 21:23:34 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:08.801 21:23:34 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:09.059 [2024-04-24 21:23:34.581372] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:09.059 request: 00:09:09.059 { 00:09:09.059 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.059 "nsid": 2, 00:09:09.059 "host": "nqn.2016-06.io.spdk:host1", 00:09:09.059 "method": "nvmf_ns_remove_host", 00:09:09.059 "req_id": 1 00:09:09.059 } 00:09:09.059 Got JSON-RPC error response 00:09:09.059 response: 00:09:09.059 { 00:09:09.059 "code": -32602, 00:09:09.059 "message": "Invalid parameters" 00:09:09.059 } 00:09:09.059 21:23:34 -- common/autotest_common.sh@641 -- # es=1 00:09:09.059 21:23:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:09.059 21:23:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:09.059 21:23:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:09.059 21:23:34 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:09.059 21:23:34 -- common/autotest_common.sh@638 -- # local es=0 00:09:09.059 21:23:34 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:09.059 21:23:34 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:09.059 21:23:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:09.059 21:23:34 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:09.059 21:23:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:09.059 21:23:34 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:09.059 21:23:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:09.059 21:23:34 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:09.059 21:23:34 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:09.059 21:23:34 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:09.059 21:23:34 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:09.059 21:23:34 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.059 21:23:34 -- common/autotest_common.sh@641 -- # es=1 00:09:09.059 21:23:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:09.059 21:23:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:09.059 21:23:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:09.059 21:23:34 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:09.059 21:23:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:09.059 21:23:34 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:09.059 [ 0]:0x2 00:09:09.059 21:23:34 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:09.059 21:23:34 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:09:09.317 21:23:34 -- target/ns_masking.sh@40 -- # nguid=ee7acc9e69654bd68d43b41596868821 00:09:09.317 21:23:34 -- target/ns_masking.sh@41 -- # [[ ee7acc9e69654bd68d43b41596868821 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.317 21:23:34 -- target/ns_masking.sh@108 -- # disconnect 00:09:09.317 21:23:34 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.317 21:23:34 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.575 21:23:35 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:09.575 21:23:35 -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:09.575 21:23:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:09.575 21:23:35 -- nvmf/common.sh@117 -- # sync 00:09:09.575 21:23:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:09.575 21:23:35 -- nvmf/common.sh@120 -- # set +e 00:09:09.575 21:23:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:09.575 21:23:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:09.575 rmmod nvme_tcp 00:09:09.575 rmmod nvme_fabrics 00:09:09.575 rmmod nvme_keyring 00:09:09.575 21:23:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:09.575 21:23:35 -- nvmf/common.sh@124 -- # set -e 00:09:09.575 21:23:35 -- nvmf/common.sh@125 -- # return 0 00:09:09.575 21:23:35 -- nvmf/common.sh@478 -- # '[' -n 2544241 ']' 00:09:09.575 21:23:35 -- nvmf/common.sh@479 -- # killprocess 2544241 00:09:09.575 21:23:35 -- common/autotest_common.sh@936 -- # '[' -z 2544241 ']' 00:09:09.575 21:23:35 -- common/autotest_common.sh@940 -- # kill -0 2544241 00:09:09.575 21:23:35 -- common/autotest_common.sh@941 -- # uname 00:09:09.575 21:23:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:09.575 21:23:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2544241 00:09:09.575 21:23:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:09.575 21:23:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:09.575 21:23:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2544241' 00:09:09.575 killing process with pid 2544241 00:09:09.575 21:23:35 -- common/autotest_common.sh@955 -- # kill 2544241 00:09:09.575 21:23:35 -- common/autotest_common.sh@960 -- # wait 2544241 00:09:10.144 21:23:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:10.144 21:23:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:10.144 21:23:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:10.144 21:23:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.144 21:23:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:10.144 21:23:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.144 21:23:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.144 21:23:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.046 21:23:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:12.046 00:09:12.046 real 0m16.649s 00:09:12.046 user 0m52.018s 00:09:12.046 sys 0m3.751s 00:09:12.046 21:23:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:12.046 21:23:37 -- common/autotest_common.sh@10 -- # set +x 00:09:12.046 ************************************ 00:09:12.046 END TEST nvmf_ns_masking 00:09:12.046 
************************************ 00:09:12.046 21:23:37 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:12.046 21:23:37 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:12.046 21:23:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:12.046 21:23:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.046 21:23:37 -- common/autotest_common.sh@10 -- # set +x 00:09:12.046 ************************************ 00:09:12.046 START TEST nvmf_nvme_cli 00:09:12.046 ************************************ 00:09:12.046 21:23:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:12.305 * Looking for test storage... 00:09:12.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.305 21:23:37 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.305 21:23:37 -- nvmf/common.sh@7 -- # uname -s 00:09:12.305 21:23:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.305 21:23:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.305 21:23:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.305 21:23:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.305 21:23:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.305 21:23:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.305 21:23:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.305 21:23:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.305 21:23:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.305 21:23:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.305 21:23:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:12.305 21:23:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:12.305 21:23:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.305 21:23:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.305 21:23:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.305 21:23:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.305 21:23:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.305 21:23:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.305 21:23:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.305 21:23:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.305 21:23:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.305 21:23:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.305 21:23:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.305 21:23:37 -- paths/export.sh@5 -- # export PATH 00:09:12.305 21:23:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.305 21:23:37 -- nvmf/common.sh@47 -- # : 0 00:09:12.305 21:23:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.305 21:23:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.305 21:23:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.305 21:23:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.305 21:23:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.305 21:23:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.305 21:23:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.305 21:23:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.305 21:23:37 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.305 21:23:37 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.305 21:23:37 -- target/nvme_cli.sh@14 -- # devs=() 00:09:12.305 21:23:37 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:12.305 21:23:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:12.305 21:23:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.305 21:23:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:12.305 21:23:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:12.305 21:23:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:12.305 21:23:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.305 21:23:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.305 21:23:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.305 21:23:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:12.305 21:23:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:12.305 21:23:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:12.305 21:23:37 -- common/autotest_common.sh@10 -- # set +x 00:09:14.207 21:23:39 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:14.207 21:23:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.207 21:23:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.207 21:23:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.207 21:23:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.207 21:23:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.207 21:23:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.207 21:23:39 -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.207 21:23:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.207 21:23:39 -- nvmf/common.sh@296 -- # e810=() 00:09:14.207 21:23:39 -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.207 21:23:39 -- nvmf/common.sh@297 -- # x722=() 00:09:14.207 21:23:39 -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.207 21:23:39 -- nvmf/common.sh@298 -- # mlx=() 00:09:14.207 21:23:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.207 21:23:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.207 21:23:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.207 21:23:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.207 21:23:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.207 21:23:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.207 21:23:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.207 21:23:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.207 21:23:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.207 21:23:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.207 21:23:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.207 21:23:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.207 21:23:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.207 21:23:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.207 21:23:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.207 21:23:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.207 21:23:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:14.207 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:14.207 21:23:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.207 21:23:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:14.207 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:14.207 21:23:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
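Prologue for the phy-mode network: common.sh matches the supported PCI IDs collected above (the E810/X722 and Mellanox lists) against the bus, resolves each hit to its kernel interface through sysfs, and then moves the target-side port into its own network namespace so the two NIC ports can reach each other over a real TCP path. The stretch of trace that follows amounts to roughly this, using the BDFs and cvl_* interface names found on this machine:

pci=0000:0a:00.0                                       # E810 port (0x8086:0x159b), ice driver
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)       # sysfs gives the netdev behind the BDF
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"

ip netns add cvl_0_0_ns_spdk                           # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # reachability check before the target starts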
00:09:14.207 21:23:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.207 21:23:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.207 21:23:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.207 21:23:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.208 21:23:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:14.208 21:23:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.208 21:23:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:14.208 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:14.208 21:23:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.208 21:23:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.208 21:23:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.208 21:23:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:14.208 21:23:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.208 21:23:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:14.208 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:14.208 21:23:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.208 21:23:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:14.208 21:23:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:14.208 21:23:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:14.208 21:23:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:14.208 21:23:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:14.208 21:23:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.208 21:23:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.208 21:23:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.208 21:23:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.208 21:23:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.208 21:23:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.208 21:23:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.208 21:23:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.208 21:23:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.208 21:23:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.208 21:23:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.208 21:23:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.208 21:23:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.208 21:23:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.208 21:23:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.208 21:23:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.208 21:23:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.466 21:23:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.466 21:23:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.466 21:23:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:14.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:09:14.466 00:09:14.466 --- 10.0.0.2 ping statistics --- 00:09:14.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.466 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:14.466 21:23:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:09:14.466 00:09:14.466 --- 10.0.0.1 ping statistics --- 00:09:14.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.466 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:09:14.466 21:23:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.466 21:23:39 -- nvmf/common.sh@411 -- # return 0 00:09:14.466 21:23:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:14.466 21:23:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.466 21:23:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:14.466 21:23:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:14.466 21:23:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.466 21:23:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:14.466 21:23:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:14.466 21:23:39 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:14.466 21:23:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:14.466 21:23:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:14.466 21:23:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.466 21:23:39 -- nvmf/common.sh@470 -- # nvmfpid=2547801 00:09:14.466 21:23:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.466 21:23:39 -- nvmf/common.sh@471 -- # waitforlisten 2547801 00:09:14.466 21:23:39 -- common/autotest_common.sh@817 -- # '[' -z 2547801 ']' 00:09:14.466 21:23:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.466 21:23:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:14.466 21:23:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.466 21:23:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:14.466 21:23:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.466 [2024-04-24 21:23:40.023194] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:09:14.466 [2024-04-24 21:23:40.023287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.466 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.466 [2024-04-24 21:23:40.096773] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.724 [2024-04-24 21:23:40.216661] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.724 [2024-04-24 21:23:40.216729] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:14.724 [2024-04-24 21:23:40.216743] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.724 [2024-04-24 21:23:40.216756] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.724 [2024-04-24 21:23:40.216766] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.724 [2024-04-24 21:23:40.216850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.724 [2024-04-24 21:23:40.216921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.724 [2024-04-24 21:23:40.216891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.724 [2024-04-24 21:23:40.216924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.655 21:23:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:15.655 21:23:41 -- common/autotest_common.sh@850 -- # return 0 00:09:15.655 21:23:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:15.655 21:23:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:15.655 21:23:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.655 21:23:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.655 21:23:41 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.655 21:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.655 21:23:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.655 [2024-04-24 21:23:41.034676] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.655 21:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.655 21:23:41 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.655 21:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.655 21:23:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.655 Malloc0 00:09:15.655 21:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.655 21:23:41 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:15.655 21:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.655 21:23:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.655 Malloc1 00:09:15.655 21:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.656 21:23:41 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:15.656 21:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.656 21:23:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.656 21:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.656 21:23:41 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.656 21:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.656 21:23:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.656 21:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.656 21:23:41 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:15.656 21:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.656 21:23:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.656 21:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.656 21:23:41 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:09:15.656 21:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.656 21:23:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.656 [2024-04-24 21:23:41.121194] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.656 21:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.656 21:23:41 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:15.656 21:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.656 21:23:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.656 21:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.656 21:23:41 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:15.656 00:09:15.656 Discovery Log Number of Records 2, Generation counter 2 00:09:15.656 =====Discovery Log Entry 0====== 00:09:15.656 trtype: tcp 00:09:15.656 adrfam: ipv4 00:09:15.656 subtype: current discovery subsystem 00:09:15.656 treq: not required 00:09:15.656 portid: 0 00:09:15.656 trsvcid: 4420 00:09:15.656 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:15.656 traddr: 10.0.0.2 00:09:15.656 eflags: explicit discovery connections, duplicate discovery information 00:09:15.656 sectype: none 00:09:15.656 =====Discovery Log Entry 1====== 00:09:15.656 trtype: tcp 00:09:15.656 adrfam: ipv4 00:09:15.656 subtype: nvme subsystem 00:09:15.656 treq: not required 00:09:15.656 portid: 0 00:09:15.656 trsvcid: 4420 00:09:15.656 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:15.656 traddr: 10.0.0.2 00:09:15.656 eflags: none 00:09:15.656 sectype: none 00:09:15.656 21:23:41 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:15.656 21:23:41 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:15.656 21:23:41 -- nvmf/common.sh@511 -- # local dev _ 00:09:15.656 21:23:41 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:15.656 21:23:41 -- nvmf/common.sh@510 -- # nvme list 00:09:15.656 21:23:41 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:15.656 21:23:41 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:15.656 21:23:41 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:15.656 21:23:41 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:15.656 21:23:41 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:15.656 21:23:41 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.221 21:23:41 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:16.221 21:23:41 -- common/autotest_common.sh@1184 -- # local i=0 00:09:16.221 21:23:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.221 21:23:41 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:09:16.221 21:23:41 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:09:16.221 21:23:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:18.746 21:23:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:18.746 21:23:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:18.746 21:23:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.746 21:23:43 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
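The nvme_cli test verifies two things on the host side: the discovery log carries exactly two records (the discovery subsystem itself plus cnode1), and the set of /dev/nvme* block devices grows by the two malloc namespaces after connect. The get_nvme_devs helper traced above and below simply filters nvme list output; a condensed sketch, with the host NQN/ID generated for this run:

nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
              --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420

get_nvme_devs() {
    local dev _
    while read -r dev _; do                            # first column of each nvme list row
        [[ $dev == /dev/nvme* ]] && echo "$dev"        # skip the Node/separator header lines
    done < <(nvme list)
}

devs=($(get_nvme_devs))                                # here: /dev/nvme0n1 /dev/nvme0n2
(( ${#devs[@]} == 2 )) || echo "expected 2 namespaces after connect"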
00:09:18.746 21:23:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.746 21:23:43 -- common/autotest_common.sh@1194 -- # return 0 00:09:18.746 21:23:43 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:18.746 21:23:43 -- nvmf/common.sh@511 -- # local dev _ 00:09:18.746 21:23:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:18.746 21:23:43 -- nvmf/common.sh@510 -- # nvme list 00:09:18.746 21:23:43 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:18.746 21:23:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:18.746 21:23:43 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:18.746 21:23:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:18.746 21:23:43 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:18.746 21:23:43 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:18.746 21:23:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:18.746 21:23:43 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:18.746 21:23:43 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:18.746 21:23:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:18.746 21:23:43 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:18.746 /dev/nvme0n1 ]] 00:09:18.746 21:23:43 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:18.746 21:23:43 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:18.746 21:23:43 -- nvmf/common.sh@511 -- # local dev _ 00:09:18.746 21:23:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:18.746 21:23:43 -- nvmf/common.sh@510 -- # nvme list 00:09:18.746 21:23:44 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:18.746 21:23:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:18.746 21:23:44 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:18.746 21:23:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:18.746 21:23:44 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:18.746 21:23:44 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:18.746 21:23:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:18.746 21:23:44 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:18.746 21:23:44 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:18.746 21:23:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:18.746 21:23:44 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:18.746 21:23:44 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.746 21:23:44 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.746 21:23:44 -- common/autotest_common.sh@1205 -- # local i=0 00:09:18.746 21:23:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:18.746 21:23:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.746 21:23:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:18.746 21:23:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.746 21:23:44 -- common/autotest_common.sh@1217 -- # return 0 00:09:18.746 21:23:44 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:18.746 21:23:44 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.746 21:23:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.746 21:23:44 -- common/autotest_common.sh@10 -- # set +x 00:09:18.746 21:23:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.746 21:23:44 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:18.746 21:23:44 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:18.746 21:23:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:18.746 21:23:44 -- nvmf/common.sh@117 -- # sync 00:09:18.746 21:23:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.746 21:23:44 -- nvmf/common.sh@120 -- # set +e 00:09:18.746 21:23:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.746 21:23:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.746 rmmod nvme_tcp 00:09:18.746 rmmod nvme_fabrics 00:09:18.746 rmmod nvme_keyring 00:09:19.004 21:23:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.004 21:23:44 -- nvmf/common.sh@124 -- # set -e 00:09:19.004 21:23:44 -- nvmf/common.sh@125 -- # return 0 00:09:19.004 21:23:44 -- nvmf/common.sh@478 -- # '[' -n 2547801 ']' 00:09:19.004 21:23:44 -- nvmf/common.sh@479 -- # killprocess 2547801 00:09:19.004 21:23:44 -- common/autotest_common.sh@936 -- # '[' -z 2547801 ']' 00:09:19.004 21:23:44 -- common/autotest_common.sh@940 -- # kill -0 2547801 00:09:19.004 21:23:44 -- common/autotest_common.sh@941 -- # uname 00:09:19.004 21:23:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:19.004 21:23:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2547801 00:09:19.004 21:23:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:19.004 21:23:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:19.004 21:23:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2547801' 00:09:19.004 killing process with pid 2547801 00:09:19.004 21:23:44 -- common/autotest_common.sh@955 -- # kill 2547801 00:09:19.004 21:23:44 -- common/autotest_common.sh@960 -- # wait 2547801 00:09:19.262 21:23:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:19.262 21:23:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:19.262 21:23:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:19.262 21:23:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.262 21:23:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.262 21:23:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.262 21:23:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.262 21:23:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.163 21:23:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:21.163 00:09:21.163 real 0m9.107s 00:09:21.163 user 0m18.709s 00:09:21.163 sys 0m2.289s 00:09:21.163 21:23:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:21.163 21:23:46 -- common/autotest_common.sh@10 -- # set +x 00:09:21.163 ************************************ 00:09:21.163 END TEST nvmf_nvme_cli 00:09:21.163 ************************************ 00:09:21.459 21:23:46 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:21.459 21:23:46 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:21.459 21:23:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:21.459 21:23:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.459 21:23:46 -- common/autotest_common.sh@10 -- # set +x 00:09:21.459 ************************************ 00:09:21.459 START TEST nvmf_vfio_user 00:09:21.459 ************************************ 00:09:21.459 21:23:46 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:21.459 * Looking for test storage... 00:09:21.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.459 21:23:47 -- nvmf/common.sh@7 -- # uname -s 00:09:21.459 21:23:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.459 21:23:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.459 21:23:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.459 21:23:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.459 21:23:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.459 21:23:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.459 21:23:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.459 21:23:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.459 21:23:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.459 21:23:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.459 21:23:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.459 21:23:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.459 21:23:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.459 21:23:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.459 21:23:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.459 21:23:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.459 21:23:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.459 21:23:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.459 21:23:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.459 21:23:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.459 21:23:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.459 21:23:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.459 21:23:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.459 21:23:47 -- paths/export.sh@5 -- # export PATH 00:09:21.459 21:23:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.459 21:23:47 -- nvmf/common.sh@47 -- # : 0 00:09:21.459 21:23:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.459 21:23:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.459 21:23:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.459 21:23:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.459 21:23:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.459 21:23:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.459 21:23:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.459 21:23:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2548746 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2548746' 00:09:21.459 Process pid: 2548746 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:21.459 21:23:47 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2548746 00:09:21.459 21:23:47 -- common/autotest_common.sh@817 -- # '[' -z 2548746 ']' 00:09:21.459 21:23:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.459 21:23:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:21.459 21:23:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.459 21:23:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:21.459 21:23:47 -- common/autotest_common.sh@10 -- # set +x 00:09:21.459 [2024-04-24 21:23:47.082898] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:09:21.459 [2024-04-24 21:23:47.082988] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.459 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.717 [2024-04-24 21:23:47.149472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.717 [2024-04-24 21:23:47.259209] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.717 [2024-04-24 21:23:47.259259] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.717 [2024-04-24 21:23:47.259289] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.717 [2024-04-24 21:23:47.259301] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.717 [2024-04-24 21:23:47.259311] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.717 [2024-04-24 21:23:47.259371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.717 [2024-04-24 21:23:47.259435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.717 [2024-04-24 21:23:47.259463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.717 [2024-04-24 21:23:47.259465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.717 21:23:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:21.717 21:23:47 -- common/autotest_common.sh@850 -- # return 0 00:09:21.717 21:23:47 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:23.095 21:23:48 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:23.095 21:23:48 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:23.095 21:23:48 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:23.095 21:23:48 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:23.095 21:23:48 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:23.095 21:23:48 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:23.353 Malloc1 00:09:23.353 21:23:48 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:23.610 21:23:49 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:23.868 21:23:49 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:24.126 21:23:49 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:24.126 21:23:49 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:24.126 21:23:49 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:24.393 Malloc2 00:09:24.394 21:23:49 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:24.651 21:23:50 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:24.909 21:23:50 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:25.171 21:23:50 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:25.171 21:23:50 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:25.171 21:23:50 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:25.171 21:23:50 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:25.171 21:23:50 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:25.171 21:23:50 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:25.171 [2024-04-24 21:23:50.673800] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:09:25.171 [2024-04-24 21:23:50.673845] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549174 ] 00:09:25.171 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.171 [2024-04-24 21:23:50.707138] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:25.171 [2024-04-24 21:23:50.712685] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:25.171 [2024-04-24 21:23:50.712714] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5b6f726000 00:09:25.171 [2024-04-24 21:23:50.713663] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:25.171 [2024-04-24 21:23:50.714658] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:25.171 [2024-04-24 21:23:50.715678] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:25.171 [2024-04-24 21:23:50.716689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:25.171 [2024-04-24 21:23:50.717683] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:25.171 [2024-04-24 21:23:50.718708] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:09:25.171 [2024-04-24 21:23:50.719694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:25.171 [2024-04-24 21:23:50.720699] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:25.171 [2024-04-24 21:23:50.721709] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:25.171 [2024-04-24 21:23:50.721733] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5b6f71b000 00:09:25.171 [2024-04-24 21:23:50.722857] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:25.171 [2024-04-24 21:23:50.734585] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:25.171 [2024-04-24 21:23:50.734621] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:25.171 [2024-04-24 21:23:50.740827] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:25.171 [2024-04-24 21:23:50.740882] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:25.171 [2024-04-24 21:23:50.740987] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:25.171 [2024-04-24 21:23:50.741016] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:25.171 [2024-04-24 21:23:50.741025] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:25.171 [2024-04-24 21:23:50.741815] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:25.171 [2024-04-24 21:23:50.741835] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:25.171 [2024-04-24 21:23:50.741848] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:25.171 [2024-04-24 21:23:50.742825] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:25.171 [2024-04-24 21:23:50.742845] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:25.171 [2024-04-24 21:23:50.742858] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:25.171 [2024-04-24 21:23:50.743831] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:25.172 [2024-04-24 21:23:50.743849] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:25.172 [2024-04-24 21:23:50.746641] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:25.172 [2024-04-24 21:23:50.746659] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:25.172 [2024-04-24 21:23:50.746669] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:25.172 [2024-04-24 21:23:50.746680] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:25.172 [2024-04-24 21:23:50.746791] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:25.172 [2024-04-24 21:23:50.746803] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:25.172 [2024-04-24 21:23:50.746812] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:25.172 [2024-04-24 21:23:50.747855] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:25.172 [2024-04-24 21:23:50.748855] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:25.172 [2024-04-24 21:23:50.749863] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:25.172 [2024-04-24 21:23:50.750855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:25.172 [2024-04-24 21:23:50.750966] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:25.172 [2024-04-24 21:23:50.751876] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:25.172 [2024-04-24 21:23:50.751895] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:25.172 [2024-04-24 21:23:50.751905] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.751929] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:25.172 [2024-04-24 21:23:50.751958] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.751984] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:25.172 [2024-04-24 21:23:50.751993] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:25.172 [2024-04-24 21:23:50.752012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:25.172 [2024-04-24 
21:23:50.752083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:25.172 [2024-04-24 21:23:50.752099] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:25.172 [2024-04-24 21:23:50.752107] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:25.172 [2024-04-24 21:23:50.752115] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:25.172 [2024-04-24 21:23:50.752122] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:25.172 [2024-04-24 21:23:50.752130] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:25.172 [2024-04-24 21:23:50.752137] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:25.172 [2024-04-24 21:23:50.752145] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752156] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:25.172 [2024-04-24 21:23:50.752190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:25.172 [2024-04-24 21:23:50.752212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:25.172 [2024-04-24 21:23:50.752226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:25.172 [2024-04-24 21:23:50.752238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:25.172 [2024-04-24 21:23:50.752250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:25.172 [2024-04-24 21:23:50.752258] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752273] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:25.172 [2024-04-24 21:23:50.752310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:25.172 [2024-04-24 21:23:50.752320] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:25.172 [2024-04-24 21:23:50.752328] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752343] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752353] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:25.172 [2024-04-24 21:23:50.752397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:25.172 [2024-04-24 21:23:50.752446] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752460] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752473] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:25.172 [2024-04-24 21:23:50.752481] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:25.172 [2024-04-24 21:23:50.752490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:25.172 [2024-04-24 21:23:50.752504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:25.172 [2024-04-24 21:23:50.752519] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:25.172 [2024-04-24 21:23:50.752534] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752564] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752576] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:25.172 [2024-04-24 21:23:50.752588] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:25.172 [2024-04-24 21:23:50.752598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:25.172 [2024-04-24 21:23:50.752645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:25.172 [2024-04-24 21:23:50.752675] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752690] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752703] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:09:25.172 [2024-04-24 21:23:50.752712] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:25.172 [2024-04-24 21:23:50.752722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:25.172 [2024-04-24 21:23:50.752736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:25.172 [2024-04-24 21:23:50.752751] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752763] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752777] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752788] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752796] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752805] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:25.172 [2024-04-24 21:23:50.752813] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:25.172 [2024-04-24 21:23:50.752821] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:25.172 [2024-04-24 21:23:50.752846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:25.172 [2024-04-24 21:23:50.752880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:25.173 [2024-04-24 21:23:50.752899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:25.173 [2024-04-24 21:23:50.752911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:25.173 [2024-04-24 21:23:50.752927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:25.173 [2024-04-24 21:23:50.752952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:25.173 [2024-04-24 21:23:50.752968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:25.173 [2024-04-24 21:23:50.752979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:25.173 [2024-04-24 21:23:50.752996] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:25.173 [2024-04-24 21:23:50.753008] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:25.173 [2024-04-24 21:23:50.753015] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:25.173 [2024-04-24 21:23:50.753021] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:25.173 [2024-04-24 21:23:50.753030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:25.173 [2024-04-24 21:23:50.753042] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:25.173 [2024-04-24 21:23:50.753050] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:25.173 [2024-04-24 21:23:50.753058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:25.173 [2024-04-24 21:23:50.753069] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:25.173 [2024-04-24 21:23:50.753077] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:25.173 [2024-04-24 21:23:50.753086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:25.173 [2024-04-24 21:23:50.753097] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:25.173 [2024-04-24 21:23:50.753105] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:25.173 [2024-04-24 21:23:50.753114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:25.173 [2024-04-24 21:23:50.753125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:25.173 [2024-04-24 21:23:50.753145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:25.173 [2024-04-24 21:23:50.753161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:25.173 [2024-04-24 21:23:50.753173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:25.173 ===================================================== 00:09:25.173 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:25.173 ===================================================== 00:09:25.173 Controller Capabilities/Features 00:09:25.173 ================================ 00:09:25.173 Vendor ID: 4e58 00:09:25.173 Subsystem Vendor ID: 4e58 00:09:25.173 Serial Number: SPDK1 00:09:25.173 Model Number: SPDK bdev Controller 00:09:25.173 Firmware Version: 24.05 00:09:25.173 Recommended Arb Burst: 6 00:09:25.173 IEEE OUI Identifier: 8d 6b 50 00:09:25.173 Multi-path I/O 00:09:25.173 May have multiple subsystem ports: Yes 00:09:25.173 May have multiple controllers: Yes 00:09:25.173 Associated with SR-IOV VF: No 00:09:25.173 Max Data Transfer Size: 131072 00:09:25.173 Max Number of Namespaces: 32 00:09:25.173 Max Number of I/O Queues: 127 00:09:25.173 NVMe 
Specification Version (VS): 1.3 00:09:25.173 NVMe Specification Version (Identify): 1.3 00:09:25.173 Maximum Queue Entries: 256 00:09:25.173 Contiguous Queues Required: Yes 00:09:25.173 Arbitration Mechanisms Supported 00:09:25.173 Weighted Round Robin: Not Supported 00:09:25.173 Vendor Specific: Not Supported 00:09:25.173 Reset Timeout: 15000 ms 00:09:25.173 Doorbell Stride: 4 bytes 00:09:25.173 NVM Subsystem Reset: Not Supported 00:09:25.173 Command Sets Supported 00:09:25.173 NVM Command Set: Supported 00:09:25.173 Boot Partition: Not Supported 00:09:25.173 Memory Page Size Minimum: 4096 bytes 00:09:25.173 Memory Page Size Maximum: 4096 bytes 00:09:25.173 Persistent Memory Region: Not Supported 00:09:25.173 Optional Asynchronous Events Supported 00:09:25.173 Namespace Attribute Notices: Supported 00:09:25.173 Firmware Activation Notices: Not Supported 00:09:25.173 ANA Change Notices: Not Supported 00:09:25.173 PLE Aggregate Log Change Notices: Not Supported 00:09:25.173 LBA Status Info Alert Notices: Not Supported 00:09:25.173 EGE Aggregate Log Change Notices: Not Supported 00:09:25.173 Normal NVM Subsystem Shutdown event: Not Supported 00:09:25.173 Zone Descriptor Change Notices: Not Supported 00:09:25.173 Discovery Log Change Notices: Not Supported 00:09:25.173 Controller Attributes 00:09:25.173 128-bit Host Identifier: Supported 00:09:25.173 Non-Operational Permissive Mode: Not Supported 00:09:25.173 NVM Sets: Not Supported 00:09:25.173 Read Recovery Levels: Not Supported 00:09:25.173 Endurance Groups: Not Supported 00:09:25.173 Predictable Latency Mode: Not Supported 00:09:25.173 Traffic Based Keep ALive: Not Supported 00:09:25.173 Namespace Granularity: Not Supported 00:09:25.173 SQ Associations: Not Supported 00:09:25.173 UUID List: Not Supported 00:09:25.173 Multi-Domain Subsystem: Not Supported 00:09:25.173 Fixed Capacity Management: Not Supported 00:09:25.173 Variable Capacity Management: Not Supported 00:09:25.173 Delete Endurance Group: Not Supported 00:09:25.173 Delete NVM Set: Not Supported 00:09:25.173 Extended LBA Formats Supported: Not Supported 00:09:25.173 Flexible Data Placement Supported: Not Supported 00:09:25.173 00:09:25.173 Controller Memory Buffer Support 00:09:25.173 ================================ 00:09:25.173 Supported: No 00:09:25.173 00:09:25.173 Persistent Memory Region Support 00:09:25.173 ================================ 00:09:25.173 Supported: No 00:09:25.173 00:09:25.173 Admin Command Set Attributes 00:09:25.173 ============================ 00:09:25.173 Security Send/Receive: Not Supported 00:09:25.173 Format NVM: Not Supported 00:09:25.173 Firmware Activate/Download: Not Supported 00:09:25.173 Namespace Management: Not Supported 00:09:25.173 Device Self-Test: Not Supported 00:09:25.173 Directives: Not Supported 00:09:25.173 NVMe-MI: Not Supported 00:09:25.173 Virtualization Management: Not Supported 00:09:25.173 Doorbell Buffer Config: Not Supported 00:09:25.173 Get LBA Status Capability: Not Supported 00:09:25.173 Command & Feature Lockdown Capability: Not Supported 00:09:25.173 Abort Command Limit: 4 00:09:25.173 Async Event Request Limit: 4 00:09:25.173 Number of Firmware Slots: N/A 00:09:25.173 Firmware Slot 1 Read-Only: N/A 00:09:25.173 Firmware Activation Without Reset: N/A 00:09:25.173 Multiple Update Detection Support: N/A 00:09:25.173 Firmware Update Granularity: No Information Provided 00:09:25.173 Per-Namespace SMART Log: No 00:09:25.173 Asymmetric Namespace Access Log Page: Not Supported 00:09:25.173 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:09:25.173 Command Effects Log Page: Supported 00:09:25.173 Get Log Page Extended Data: Supported 00:09:25.173 Telemetry Log Pages: Not Supported 00:09:25.173 Persistent Event Log Pages: Not Supported 00:09:25.173 Supported Log Pages Log Page: May Support 00:09:25.173 Commands Supported & Effects Log Page: Not Supported 00:09:25.173 Feature Identifiers & Effects Log Page:May Support 00:09:25.173 NVMe-MI Commands & Effects Log Page: May Support 00:09:25.173 Data Area 4 for Telemetry Log: Not Supported 00:09:25.173 Error Log Page Entries Supported: 128 00:09:25.173 Keep Alive: Supported 00:09:25.173 Keep Alive Granularity: 10000 ms 00:09:25.173 00:09:25.173 NVM Command Set Attributes 00:09:25.173 ========================== 00:09:25.173 Submission Queue Entry Size 00:09:25.173 Max: 64 00:09:25.173 Min: 64 00:09:25.173 Completion Queue Entry Size 00:09:25.173 Max: 16 00:09:25.173 Min: 16 00:09:25.173 Number of Namespaces: 32 00:09:25.173 Compare Command: Supported 00:09:25.173 Write Uncorrectable Command: Not Supported 00:09:25.173 Dataset Management Command: Supported 00:09:25.174 Write Zeroes Command: Supported 00:09:25.174 Set Features Save Field: Not Supported 00:09:25.174 Reservations: Not Supported 00:09:25.174 Timestamp: Not Supported 00:09:25.174 Copy: Supported 00:09:25.174 Volatile Write Cache: Present 00:09:25.174 Atomic Write Unit (Normal): 1 00:09:25.174 Atomic Write Unit (PFail): 1 00:09:25.174 Atomic Compare & Write Unit: 1 00:09:25.174 Fused Compare & Write: Supported 00:09:25.174 Scatter-Gather List 00:09:25.174 SGL Command Set: Supported (Dword aligned) 00:09:25.174 SGL Keyed: Not Supported 00:09:25.174 SGL Bit Bucket Descriptor: Not Supported 00:09:25.174 SGL Metadata Pointer: Not Supported 00:09:25.174 Oversized SGL: Not Supported 00:09:25.174 SGL Metadata Address: Not Supported 00:09:25.174 SGL Offset: Not Supported 00:09:25.174 Transport SGL Data Block: Not Supported 00:09:25.174 Replay Protected Memory Block: Not Supported 00:09:25.174 00:09:25.174 Firmware Slot Information 00:09:25.174 ========================= 00:09:25.174 Active slot: 1 00:09:25.174 Slot 1 Firmware Revision: 24.05 00:09:25.174 00:09:25.174 00:09:25.174 Commands Supported and Effects 00:09:25.174 ============================== 00:09:25.174 Admin Commands 00:09:25.174 -------------- 00:09:25.174 Get Log Page (02h): Supported 00:09:25.174 Identify (06h): Supported 00:09:25.174 Abort (08h): Supported 00:09:25.174 Set Features (09h): Supported 00:09:25.174 Get Features (0Ah): Supported 00:09:25.174 Asynchronous Event Request (0Ch): Supported 00:09:25.174 Keep Alive (18h): Supported 00:09:25.174 I/O Commands 00:09:25.174 ------------ 00:09:25.174 Flush (00h): Supported LBA-Change 00:09:25.174 Write (01h): Supported LBA-Change 00:09:25.174 Read (02h): Supported 00:09:25.174 Compare (05h): Supported 00:09:25.174 Write Zeroes (08h): Supported LBA-Change 00:09:25.174 Dataset Management (09h): Supported LBA-Change 00:09:25.174 Copy (19h): Supported LBA-Change 00:09:25.174 Unknown (79h): Supported LBA-Change 00:09:25.174 Unknown (7Ah): Supported 00:09:25.174 00:09:25.174 Error Log 00:09:25.174 ========= 00:09:25.174 00:09:25.174 Arbitration 00:09:25.174 =========== 00:09:25.174 Arbitration Burst: 1 00:09:25.174 00:09:25.174 Power Management 00:09:25.174 ================ 00:09:25.174 Number of Power States: 1 00:09:25.174 Current Power State: Power State #0 00:09:25.174 Power State #0: 00:09:25.174 Max Power: 0.00 W 00:09:25.174 Non-Operational State: Operational 00:09:25.174 Entry 
Latency: Not Reported 00:09:25.174 Exit Latency: Not Reported 00:09:25.174 Relative Read Throughput: 0 00:09:25.174 Relative Read Latency: 0 00:09:25.174 Relative Write Throughput: 0 00:09:25.174 Relative Write Latency: 0 00:09:25.174 Idle Power: Not Reported 00:09:25.174 Active Power: Not Reported 00:09:25.174 Non-Operational Permissive Mode: Not Supported 00:09:25.174 00:09:25.174 Health Information 00:09:25.174 ================== 00:09:25.174 Critical Warnings: 00:09:25.174 Available Spare Space: OK 00:09:25.174 Temperature: OK 00:09:25.174 Device Reliability: OK 00:09:25.174 Read Only: No 00:09:25.174 Volatile Memory Backup: OK 00:09:25.174 Current Temperature: 0 Kelvin (-273 Celsius) [2024-04-24 21:23:50.753303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:25.174 [2024-04-24 21:23:50.753320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:25.174 [2024-04-24 21:23:50.753356] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:25.174 [2024-04-24 21:23:50.753373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:25.174 [2024-04-24 21:23:50.753383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:25.174 [2024-04-24 21:23:50.753393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:25.174 [2024-04-24 21:23:50.753403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:25.174 [2024-04-24 21:23:50.755646] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:25.174 [2024-04-24 21:23:50.755671] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:25.174 [2024-04-24 21:23:50.755927] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:25.174 [2024-04-24 21:23:50.756013] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:25.174 [2024-04-24 21:23:50.756031] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:25.174 [2024-04-24 21:23:50.756925] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:25.174 [2024-04-24 21:23:50.756961] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:25.174 [2024-04-24 21:23:50.757014] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:25.174 [2024-04-24 21:23:50.758988] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:25.174 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:25.174 Available Spare: 0% 00:09:25.174 Available Spare Threshold: 0% 00:09:25.174 Life Percentage Used: 0%
00:09:25.174 Data Units Read: 0 00:09:25.174 Data Units Written: 0 00:09:25.174 Host Read Commands: 0 00:09:25.174 Host Write Commands: 0 00:09:25.174 Controller Busy Time: 0 minutes 00:09:25.174 Power Cycles: 0 00:09:25.174 Power On Hours: 0 hours 00:09:25.174 Unsafe Shutdowns: 0 00:09:25.174 Unrecoverable Media Errors: 0 00:09:25.174 Lifetime Error Log Entries: 0 00:09:25.174 Warning Temperature Time: 0 minutes 00:09:25.174 Critical Temperature Time: 0 minutes 00:09:25.174 00:09:25.174 Number of Queues 00:09:25.174 ================ 00:09:25.174 Number of I/O Submission Queues: 127 00:09:25.174 Number of I/O Completion Queues: 127 00:09:25.174 00:09:25.174 Active Namespaces 00:09:25.174 ================= 00:09:25.174 Namespace ID:1 00:09:25.174 Error Recovery Timeout: Unlimited 00:09:25.174 Command Set Identifier: NVM (00h) 00:09:25.174 Deallocate: Supported 00:09:25.174 Deallocated/Unwritten Error: Not Supported 00:09:25.174 Deallocated Read Value: Unknown 00:09:25.174 Deallocate in Write Zeroes: Not Supported 00:09:25.174 Deallocated Guard Field: 0xFFFF 00:09:25.174 Flush: Supported 00:09:25.174 Reservation: Supported 00:09:25.174 Namespace Sharing Capabilities: Multiple Controllers 00:09:25.174 Size (in LBAs): 131072 (0GiB) 00:09:25.174 Capacity (in LBAs): 131072 (0GiB) 00:09:25.174 Utilization (in LBAs): 131072 (0GiB) 00:09:25.174 NGUID: 285DDECE4712471CB8BC9DCD76CDC454 00:09:25.174 UUID: 285ddece-4712-471c-b8bc-9dcd76cdc454 00:09:25.174 Thin Provisioning: Not Supported 00:09:25.174 Per-NS Atomic Units: Yes 00:09:25.174 Atomic Boundary Size (Normal): 0 00:09:25.174 Atomic Boundary Size (PFail): 0 00:09:25.174 Atomic Boundary Offset: 0 00:09:25.174 Maximum Single Source Range Length: 65535 00:09:25.174 Maximum Copy Length: 65535 00:09:25.174 Maximum Source Range Count: 1 00:09:25.174 NGUID/EUI64 Never Reused: No 00:09:25.174 Namespace Write Protected: No 00:09:25.174 Number of LBA Formats: 1 00:09:25.174 Current LBA Format: LBA Format #00 00:09:25.174 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:25.174 00:09:25.174 21:23:50 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:25.174 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.432 [2024-04-24 21:23:50.988465] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:30.705 [2024-04-24 21:23:56.011094] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:30.705 Initializing NVMe Controllers 00:09:30.705 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:30.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:30.705 Initialization complete. Launching workers. 
00:09:30.705 ======================================================== 00:09:30.705 Latency(us) 00:09:30.705 Device Information : IOPS MiB/s Average min max 00:09:30.705 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34293.04 133.96 3731.86 1203.85 9000.73 00:09:30.705 ======================================================== 00:09:30.705 Total : 34293.04 133.96 3731.86 1203.85 9000.73 00:09:30.705 00:09:30.705 21:23:56 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:30.705 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.705 [2024-04-24 21:23:56.254222] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:35.980 [2024-04-24 21:24:01.289180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:35.980 Initializing NVMe Controllers 00:09:35.980 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:35.980 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:35.980 Initialization complete. Launching workers. 00:09:35.980 ======================================================== 00:09:35.980 Latency(us) 00:09:35.980 Device Information : IOPS MiB/s Average min max 00:09:35.980 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16006.40 62.52 8005.19 6008.55 15948.31 00:09:35.980 ======================================================== 00:09:35.980 Total : 16006.40 62.52 8005.19 6008.55 15948.31 00:09:35.980 00:09:35.980 21:24:01 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:35.980 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.980 [2024-04-24 21:24:01.498241] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:41.259 [2024-04-24 21:24:06.568993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:41.259 Initializing NVMe Controllers 00:09:41.259 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:41.259 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:41.259 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:41.259 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:41.259 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:41.259 Initialization complete. Launching workers. 
00:09:41.259 Starting thread on core 2 00:09:41.259 Starting thread on core 3 00:09:41.259 Starting thread on core 1 00:09:41.259 21:24:06 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:41.259 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.259 [2024-04-24 21:24:06.872303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:45.484 [2024-04-24 21:24:10.400895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:45.484 Initializing NVMe Controllers 00:09:45.484 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:45.484 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:45.484 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:45.484 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:45.484 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:45.484 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:45.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:45.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:45.484 Initialization complete. Launching workers. 00:09:45.484 Starting thread on core 1 with urgent priority queue 00:09:45.484 Starting thread on core 2 with urgent priority queue 00:09:45.484 Starting thread on core 3 with urgent priority queue 00:09:45.484 Starting thread on core 0 with urgent priority queue 00:09:45.484 SPDK bdev Controller (SPDK1 ) core 0: 4866.00 IO/s 20.55 secs/100000 ios 00:09:45.484 SPDK bdev Controller (SPDK1 ) core 1: 5101.33 IO/s 19.60 secs/100000 ios 00:09:45.484 SPDK bdev Controller (SPDK1 ) core 2: 4613.67 IO/s 21.67 secs/100000 ios 00:09:45.484 SPDK bdev Controller (SPDK1 ) core 3: 4634.67 IO/s 21.58 secs/100000 ios 00:09:45.484 ======================================================== 00:09:45.484 00:09:45.484 21:24:10 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:45.484 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.484 [2024-04-24 21:24:10.699566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:45.484 [2024-04-24 21:24:10.732203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:45.484 Initializing NVMe Controllers 00:09:45.484 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:45.484 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:45.484 Namespace ID: 1 size: 0GB 00:09:45.484 Initialization complete. 00:09:45.484 INFO: using host memory buffer for IO 00:09:45.484 Hello world! 
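
For reference, the vfio-user flow exercised above reduces to a short RPC sequence plus the stock SPDK example binaries. The sketch below is a condensed reconstruction, not the verbatim test script: it assumes you are at the root of a built SPDK tree (the run above uses the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk paths), and every command and flag is taken from the trace itself.

    # Start the target on cores 0-3 with all tracepoint groups enabled
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    # Once it listens on /var/tmp/spdk.sock, create the VFIOUSER transport
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    # Two devices: each gets a 64 MB malloc bdev (512-byte blocks), a subsystem
    # that allows any host (-a), a namespace, and a vfio-user listener directory
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done
    # The example apps then attach through a VFIOUSER transport ID, e.g.:
    build/bin/spdk_nvme_identify -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

The -s 0 passed to nvmf_subsystem_add_listener corresponds to the "trsvcid": "0" reported by nvmf_get_subsystems later in the log.
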
00:09:45.484 21:24:10 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:45.484 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.484 [2024-04-24 21:24:11.022992] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:46.445 Initializing NVMe Controllers 00:09:46.445 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:46.445 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:46.445 Initialization complete. Launching workers. 00:09:46.445 submit (in ns) avg, min, max = 9055.0, 3481.1, 4024095.6 00:09:46.445 complete (in ns) avg, min, max = 24804.0, 2040.0, 6992928.9 00:09:46.445 00:09:46.445 Submit histogram 00:09:46.445 ================ 00:09:46.445 Range in us Cumulative Count 00:09:46.445 3.461 - 3.484: 0.0075% ( 1) 00:09:46.445 3.484 - 3.508: 0.2189% ( 28) 00:09:46.445 3.508 - 3.532: 1.0418% ( 109) 00:09:46.445 3.532 - 3.556: 3.6388% ( 344) 00:09:46.445 3.556 - 3.579: 9.0442% ( 716) 00:09:46.445 3.579 - 3.603: 16.9863% ( 1052) 00:09:46.445 3.603 - 3.627: 26.7779% ( 1297) 00:09:46.445 3.627 - 3.650: 38.0492% ( 1493) 00:09:46.445 3.650 - 3.674: 46.2479% ( 1086) 00:09:46.445 3.674 - 3.698: 52.5140% ( 830) 00:09:46.445 3.698 - 3.721: 56.8700% ( 577) 00:09:46.445 3.721 - 3.745: 61.1581% ( 568) 00:09:46.445 3.745 - 3.769: 64.3742% ( 426) 00:09:46.445 3.769 - 3.793: 68.0734% ( 490) 00:09:46.445 3.793 - 3.816: 70.7685% ( 357) 00:09:46.445 3.816 - 3.840: 74.7093% ( 522) 00:09:46.445 3.840 - 3.864: 78.9068% ( 556) 00:09:46.445 3.864 - 3.887: 82.7797% ( 513) 00:09:46.446 3.887 - 3.911: 85.5428% ( 366) 00:09:46.446 3.911 - 3.935: 87.5057% ( 260) 00:09:46.446 3.935 - 3.959: 88.8646% ( 180) 00:09:46.446 3.959 - 3.982: 90.3141% ( 192) 00:09:46.446 3.982 - 4.006: 91.5295% ( 161) 00:09:46.446 4.006 - 4.030: 92.3524% ( 109) 00:09:46.446 4.030 - 4.053: 93.1753% ( 109) 00:09:46.446 4.053 - 4.077: 94.1039% ( 123) 00:09:46.446 4.077 - 4.101: 94.8286% ( 96) 00:09:46.446 4.101 - 4.124: 95.3722% ( 72) 00:09:46.446 4.124 - 4.148: 95.8025% ( 57) 00:09:46.446 4.148 - 4.172: 96.0818% ( 37) 00:09:46.446 4.172 - 4.196: 96.3159% ( 31) 00:09:46.446 4.196 - 4.219: 96.5046% ( 25) 00:09:46.446 4.219 - 4.243: 96.6027% ( 13) 00:09:46.446 4.243 - 4.267: 96.7311% ( 17) 00:09:46.446 4.267 - 4.290: 96.8292% ( 13) 00:09:46.446 4.290 - 4.314: 96.9198% ( 12) 00:09:46.446 4.314 - 4.338: 97.0180% ( 13) 00:09:46.446 4.338 - 4.361: 97.1312% ( 15) 00:09:46.446 4.361 - 4.385: 97.1690% ( 5) 00:09:46.446 4.385 - 4.409: 97.2067% ( 5) 00:09:46.446 4.409 - 4.433: 97.2294% ( 3) 00:09:46.446 4.433 - 4.456: 97.2671% ( 5) 00:09:46.446 4.456 - 4.480: 97.2822% ( 2) 00:09:46.446 4.480 - 4.504: 97.3275% ( 6) 00:09:46.446 4.504 - 4.527: 97.3350% ( 1) 00:09:46.446 4.599 - 4.622: 97.3501% ( 2) 00:09:46.446 4.622 - 4.646: 97.3803% ( 4) 00:09:46.446 4.646 - 4.670: 97.4030% ( 3) 00:09:46.446 4.670 - 4.693: 97.4256% ( 3) 00:09:46.446 4.693 - 4.717: 97.4785% ( 7) 00:09:46.446 4.717 - 4.741: 97.5087% ( 4) 00:09:46.446 4.741 - 4.764: 97.5540% ( 6) 00:09:46.446 4.764 - 4.788: 97.5842% ( 4) 00:09:46.446 4.788 - 4.812: 97.5993% ( 2) 00:09:46.446 4.812 - 4.836: 97.6446% ( 6) 00:09:46.446 4.836 - 4.859: 97.6899% ( 6) 00:09:46.446 4.859 - 4.883: 97.7578% ( 9) 00:09:46.446 4.883 - 4.907: 97.7956% ( 5) 00:09:46.446 4.907 - 4.930: 97.8409% ( 6) 00:09:46.446 4.930 - 4.954: 97.8862% ( 6) 00:09:46.446 4.954 
- 4.978: 97.9239% ( 5) 00:09:46.446 4.978 - 5.001: 97.9390% ( 2) 00:09:46.446 5.001 - 5.025: 97.9767% ( 5) 00:09:46.446 5.025 - 5.049: 97.9918% ( 2) 00:09:46.446 5.049 - 5.073: 98.0220% ( 4) 00:09:46.446 5.073 - 5.096: 98.0296% ( 1) 00:09:46.446 5.096 - 5.120: 98.0371% ( 1) 00:09:46.446 5.120 - 5.144: 98.0447% ( 1) 00:09:46.446 5.167 - 5.191: 98.0598% ( 2) 00:09:46.446 5.191 - 5.215: 98.0673% ( 1) 00:09:46.446 5.215 - 5.239: 98.0749% ( 1) 00:09:46.446 5.262 - 5.286: 98.0824% ( 1) 00:09:46.446 5.310 - 5.333: 98.0900% ( 1) 00:09:46.446 5.333 - 5.357: 98.0975% ( 1) 00:09:46.446 5.404 - 5.428: 98.1051% ( 1) 00:09:46.446 5.428 - 5.452: 98.1202% ( 2) 00:09:46.446 5.452 - 5.476: 98.1277% ( 1) 00:09:46.446 5.476 - 5.499: 98.1353% ( 1) 00:09:46.446 5.499 - 5.523: 98.1428% ( 1) 00:09:46.446 5.594 - 5.618: 98.1504% ( 1) 00:09:46.446 5.641 - 5.665: 98.1579% ( 1) 00:09:46.446 5.665 - 5.689: 98.1655% ( 1) 00:09:46.446 5.713 - 5.736: 98.1730% ( 1) 00:09:46.446 5.736 - 5.760: 98.1806% ( 1) 00:09:46.446 5.760 - 5.784: 98.1881% ( 1) 00:09:46.446 5.807 - 5.831: 98.1957% ( 1) 00:09:46.446 5.855 - 5.879: 98.2032% ( 1) 00:09:46.446 5.879 - 5.902: 98.2108% ( 1) 00:09:46.446 5.997 - 6.021: 98.2183% ( 1) 00:09:46.446 6.068 - 6.116: 98.2259% ( 1) 00:09:46.446 6.116 - 6.163: 98.2334% ( 1) 00:09:46.446 6.258 - 6.305: 98.2485% ( 2) 00:09:46.446 6.353 - 6.400: 98.2561% ( 1) 00:09:46.446 6.400 - 6.447: 98.2636% ( 1) 00:09:46.446 6.447 - 6.495: 98.2712% ( 1) 00:09:46.446 6.495 - 6.542: 98.2787% ( 1) 00:09:46.446 6.684 - 6.732: 98.2863% ( 1) 00:09:46.446 6.779 - 6.827: 98.2938% ( 1) 00:09:46.446 6.827 - 6.874: 98.3014% ( 1) 00:09:46.446 6.969 - 7.016: 98.3089% ( 1) 00:09:46.446 7.016 - 7.064: 98.3165% ( 1) 00:09:46.446 7.206 - 7.253: 98.3240% ( 1) 00:09:46.446 7.253 - 7.301: 98.3316% ( 1) 00:09:46.446 7.301 - 7.348: 98.3391% ( 1) 00:09:46.446 7.348 - 7.396: 98.3467% ( 1) 00:09:46.446 7.443 - 7.490: 98.3618% ( 2) 00:09:46.446 7.490 - 7.538: 98.3693% ( 1) 00:09:46.446 7.538 - 7.585: 98.3769% ( 1) 00:09:46.446 7.585 - 7.633: 98.3844% ( 1) 00:09:46.446 7.633 - 7.680: 98.4071% ( 3) 00:09:46.446 7.727 - 7.775: 98.4146% ( 1) 00:09:46.446 7.775 - 7.822: 98.4222% ( 1) 00:09:46.446 7.822 - 7.870: 98.4297% ( 1) 00:09:46.446 7.964 - 8.012: 98.4373% ( 1) 00:09:46.446 8.059 - 8.107: 98.4448% ( 1) 00:09:46.446 8.107 - 8.154: 98.4599% ( 2) 00:09:46.446 8.154 - 8.201: 98.4826% ( 3) 00:09:46.446 8.201 - 8.249: 98.4901% ( 1) 00:09:46.446 8.296 - 8.344: 98.5052% ( 2) 00:09:46.446 8.344 - 8.391: 98.5128% ( 1) 00:09:46.446 8.391 - 8.439: 98.5279% ( 2) 00:09:46.446 8.439 - 8.486: 98.5430% ( 2) 00:09:46.446 8.533 - 8.581: 98.5505% ( 1) 00:09:46.446 8.676 - 8.723: 98.5656% ( 2) 00:09:46.446 8.723 - 8.770: 98.5732% ( 1) 00:09:46.446 8.770 - 8.818: 98.5883% ( 2) 00:09:46.446 8.960 - 9.007: 98.5958% ( 1) 00:09:46.446 9.102 - 9.150: 98.6034% ( 1) 00:09:46.446 9.197 - 9.244: 98.6260% ( 3) 00:09:46.446 9.244 - 9.292: 98.6335% ( 1) 00:09:46.446 9.292 - 9.339: 98.6486% ( 2) 00:09:46.446 9.387 - 9.434: 98.6562% ( 1) 00:09:46.446 9.481 - 9.529: 98.6637% ( 1) 00:09:46.446 9.624 - 9.671: 98.6713% ( 1) 00:09:46.446 9.908 - 9.956: 98.6788% ( 1) 00:09:46.446 9.956 - 10.003: 98.6939% ( 2) 00:09:46.446 10.003 - 10.050: 98.7015% ( 1) 00:09:46.446 10.050 - 10.098: 98.7090% ( 1) 00:09:46.446 10.145 - 10.193: 98.7166% ( 1) 00:09:46.446 10.193 - 10.240: 98.7392% ( 3) 00:09:46.446 10.287 - 10.335: 98.7468% ( 1) 00:09:46.446 10.382 - 10.430: 98.7543% ( 1) 00:09:46.446 10.477 - 10.524: 98.7619% ( 1) 00:09:46.446 10.572 - 10.619: 98.7770% ( 2) 00:09:46.446 10.667 - 
10.714: 98.7921% ( 2) 00:09:46.446 10.714 - 10.761: 98.8147% ( 3) 00:09:46.446 10.809 - 10.856: 98.8223% ( 1) 00:09:46.446 10.904 - 10.951: 98.8374% ( 2) 00:09:46.446 10.999 - 11.046: 98.8449% ( 1) 00:09:46.446 11.093 - 11.141: 98.8525% ( 1) 00:09:46.446 11.188 - 11.236: 98.8600% ( 1) 00:09:46.446 11.236 - 11.283: 98.8676% ( 1) 00:09:46.446 11.520 - 11.567: 98.8751% ( 1) 00:09:46.446 11.662 - 11.710: 98.8827% ( 1) 00:09:46.446 12.089 - 12.136: 98.8902% ( 1) 00:09:46.446 12.136 - 12.231: 98.8978% ( 1) 00:09:46.446 12.326 - 12.421: 98.9053% ( 1) 00:09:46.446 12.421 - 12.516: 98.9129% ( 1) 00:09:46.446 12.516 - 12.610: 98.9204% ( 1) 00:09:46.446 12.800 - 12.895: 98.9280% ( 1) 00:09:46.446 13.084 - 13.179: 98.9355% ( 1) 00:09:46.446 13.843 - 13.938: 98.9431% ( 1) 00:09:46.446 14.033 - 14.127: 98.9506% ( 1) 00:09:46.446 14.222 - 14.317: 98.9582% ( 1) 00:09:46.446 14.317 - 14.412: 98.9657% ( 1) 00:09:46.446 14.696 - 14.791: 98.9733% ( 1) 00:09:46.446 14.791 - 14.886: 98.9808% ( 1) 00:09:46.446 14.886 - 14.981: 98.9884% ( 1) 00:09:46.446 17.067 - 17.161: 98.9959% ( 1) 00:09:46.446 17.161 - 17.256: 99.0035% ( 1) 00:09:46.446 17.256 - 17.351: 99.0110% ( 1) 00:09:46.446 17.351 - 17.446: 99.0337% ( 3) 00:09:46.446 17.446 - 17.541: 99.0639% ( 4) 00:09:46.446 17.541 - 17.636: 99.1167% ( 7) 00:09:46.447 17.636 - 17.730: 99.1318% ( 2) 00:09:46.447 17.730 - 17.825: 99.1545% ( 3) 00:09:46.447 17.825 - 17.920: 99.2149% ( 8) 00:09:46.447 17.920 - 18.015: 99.2753% ( 8) 00:09:46.447 18.015 - 18.110: 99.3356% ( 8) 00:09:46.447 18.110 - 18.204: 99.3885% ( 7) 00:09:46.447 18.204 - 18.299: 99.4262% ( 5) 00:09:46.447 18.299 - 18.394: 99.4866% ( 8) 00:09:46.447 18.394 - 18.489: 99.5395% ( 7) 00:09:46.447 18.489 - 18.584: 99.6074% ( 9) 00:09:46.447 18.584 - 18.679: 99.6452% ( 5) 00:09:46.447 18.679 - 18.773: 99.7056% ( 8) 00:09:46.447 18.773 - 18.868: 99.7282% ( 3) 00:09:46.447 18.868 - 18.963: 99.7358% ( 1) 00:09:46.447 18.963 - 19.058: 99.7584% ( 3) 00:09:46.447 19.058 - 19.153: 99.7660% ( 1) 00:09:46.447 19.153 - 19.247: 99.7886% ( 3) 00:09:46.447 19.247 - 19.342: 99.7962% ( 1) 00:09:46.447 19.437 - 19.532: 99.8113% ( 2) 00:09:46.447 19.627 - 19.721: 99.8264% ( 2) 00:09:46.447 20.290 - 20.385: 99.8339% ( 1) 00:09:46.447 20.480 - 20.575: 99.8415% ( 1) 00:09:46.447 22.187 - 22.281: 99.8490% ( 1) 00:09:46.447 28.444 - 28.634: 99.8566% ( 1) 00:09:46.447 28.824 - 29.013: 99.8641% ( 1) 00:09:46.447 29.772 - 29.961: 99.8717% ( 1) 00:09:46.447 3980.705 - 4004.978: 99.9547% ( 11) 00:09:46.447 4004.978 - 4029.250: 100.0000% ( 6) 00:09:46.447 00:09:46.447 Complete histogram 00:09:46.447 ================== 00:09:46.447 Range in us Cumulative Count 00:09:46.447 2.039 - 2.050: 5.3903% ( 714) 00:09:46.447 2.050 - 2.062: 12.6680% ( 964) 00:09:46.447 2.062 - 2.074: 14.6082% ( 257) 00:09:46.447 2.074 - 2.086: 43.7491% ( 3860) 00:09:46.447 2.086 - 2.098: 57.2173% ( 1784) 00:09:46.447 2.098 - 2.110: 59.9804% ( 366) 00:09:46.447 2.110 - 2.121: 64.9253% ( 655) 00:09:46.447 2.121 - 2.133: 66.2011% ( 169) 00:09:46.447 2.133 - 2.145: 68.6018% ( 318) 00:09:46.447 2.145 - 2.157: 78.7181% ( 1340) 00:09:46.447 2.157 - 2.169: 81.9493% ( 428) 00:09:46.447 2.169 - 2.181: 83.0288% ( 143) 00:09:46.447 2.181 - 2.193: 84.9313% ( 252) 00:09:46.447 2.193 - 2.204: 86.0562% ( 149) 00:09:46.447 2.204 - 2.216: 87.0225% ( 128) 00:09:46.447 2.216 - 2.228: 91.4465% ( 586) 00:09:46.447 2.228 - 2.240: 93.4244% ( 262) 00:09:46.447 2.240 - 2.252: 93.9680% ( 72) 00:09:46.447 2.252 - 2.264: 94.5493% ( 77) 00:09:46.447 2.264 - 2.276: 94.8664% ( 42) 00:09:46.447 
2.276 - 2.287: 95.0249% ( 21) 00:09:46.447 2.287 - 2.299: 95.2967% ( 36) 00:09:46.447 2.299 - 2.311: 95.5609% ( 35) 00:09:46.447 2.311 - 2.323: 95.6666% ( 14) 00:09:46.447 2.323 - 2.335: 95.7346% ( 9) 00:09:46.447 2.335 - 2.347: 95.8856% ( 20) 00:09:46.447 2.347 - 2.359: 95.9837% ( 13) 00:09:46.447 2.359 - 2.370: 96.1498% ( 22) 00:09:46.447 2.370 - 2.382: 96.4593% ( 41) 00:09:46.447 2.382 - 2.394: 96.7009% ( 32) 00:09:46.447 2.394 - 2.406: 96.9953% ( 39) 00:09:46.447 2.406 - 2.418: 97.1841% ( 25) 00:09:46.447 2.418 - 2.430: 97.3803% ( 26) 00:09:46.447 2.430 - 2.441: 97.5842% ( 27) 00:09:46.447 2.441 - 2.453: 97.7125% ( 17) 00:09:46.447 2.453 - 2.465: 97.8560% ( 19) 00:09:46.447 2.465 - 2.477: 97.9541% ( 13) 00:09:46.447 2.477 - 2.489: 98.0673% ( 15) 00:09:46.447 2.489 - 2.501: 98.1504% ( 11) 00:09:46.447 2.501 - 2.513: 98.2108% ( 8) 00:09:46.447 2.513 - 2.524: 98.2636% ( 7) 00:09:46.447 2.524 - 2.536: 98.2863% ( 3) 00:09:46.447 2.536 - 2.548: 98.3391% ( 7) 00:09:46.447 2.548 - 2.560: 98.3618% ( 3) 00:09:46.447 2.560 - 2.572: 98.3693% ( 1) 00:09:46.447 2.572 - 2.584: 98.3844% ( 2) 00:09:46.447 2.596 - 2.607: 98.3920% ( 1) 00:09:46.447 2.631 - 2.643: 98.3995% ( 1) 00:09:46.447 2.643 - 2.655: 98.4071% ( 1) 00:09:46.447
[2024-04-24 21:24:12.043984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:46.447
2.679 - 2.690: 98.4146% ( 1) 00:09:46.447 2.726 - 2.738: 98.4222% ( 1) 00:09:46.447 2.750 - 2.761: 98.4448% ( 3) 00:09:46.447 3.129 - 3.153: 98.4524% ( 1) 00:09:46.447 3.224 - 3.247: 98.4599% ( 1) 00:09:46.447 3.247 - 3.271: 98.4675% ( 1) 00:09:46.447 3.271 - 3.295: 98.4750% ( 1) 00:09:46.447 3.295 - 3.319: 98.4826% ( 1) 00:09:46.447 3.319 - 3.342: 98.4901% ( 1) 00:09:46.447 3.342 - 3.366: 98.4977% ( 1) 00:09:46.447 3.366 - 3.390: 98.5128% ( 2) 00:09:46.447 3.390 - 3.413: 98.5203% ( 1) 00:09:46.447 3.413 - 3.437: 98.5430% ( 3) 00:09:46.447 3.437 - 3.461: 98.5581% ( 2) 00:09:46.447 3.461 - 3.484: 98.5656% ( 1) 00:09:46.447 3.484 - 3.508: 98.5807% ( 2) 00:09:46.447 3.603 - 3.627: 98.5883% ( 1) 00:09:46.447 3.627 - 3.650: 98.5958% ( 1) 00:09:46.447 3.650 - 3.674: 98.6034% ( 1) 00:09:46.447 3.674 - 3.698: 98.6109% ( 1) 00:09:46.447 3.769 - 3.793: 98.6185% ( 1) 00:09:46.447 3.816 - 3.840: 98.6260% ( 1) 00:09:46.447 4.030 - 4.053: 98.6335% ( 1) 00:09:46.447 4.077 - 4.101: 98.6411% ( 1) 00:09:46.447 5.452 - 5.476: 98.6486% ( 1) 00:09:46.447 5.523 - 5.547: 98.6637% ( 2) 00:09:46.447 5.665 - 5.689: 98.6713% ( 1) 00:09:46.447 5.831 - 5.855: 98.6864% ( 2) 00:09:46.447 5.855 - 5.879: 98.6939% ( 1) 00:09:46.447 6.068 - 6.116: 98.7015% ( 1) 00:09:46.447 6.258 - 6.305: 98.7090% ( 1) 00:09:46.447 6.305 - 6.353: 98.7166% ( 1) 00:09:46.447 6.353 - 6.400: 98.7241% ( 1) 00:09:46.447 6.495 - 6.542: 98.7392% ( 2) 00:09:46.447 6.590 - 6.637: 98.7468% ( 1) 00:09:46.447 6.732 - 6.779: 98.7543% ( 1) 00:09:46.447 7.064 - 7.111: 98.7619% ( 1) 00:09:46.447 7.111 - 7.159: 98.7694% ( 1) 00:09:46.447 7.253 - 7.301: 98.7770% ( 1) 00:09:46.447 7.301 - 7.348: 98.7845% ( 1) 00:09:46.447 7.633 - 7.680: 98.7921% ( 1) 00:09:46.447 7.680 - 7.727: 98.7996% ( 1) 00:09:46.447 7.917 - 7.964: 98.8072% ( 1) 00:09:46.447 7.964 - 8.012: 98.8147% ( 1) 00:09:46.447 8.059 - 8.107: 98.8223% ( 1) 00:09:46.447 10.145 - 10.193: 98.8298% ( 1) 00:09:46.447 11.947 - 11.994: 98.8374% ( 1) 00:09:46.447 14.412 - 14.507: 98.8449% ( 1) 00:09:46.447 15.360 - 15.455: 98.8525% ( 1) 00:09:46.447 15.550 - 15.644: 98.8600% ( 1) 00:09:46.447 15.644 - 15.739: 98.8827% ( 3) 00:09:46.447 15.739 - 15.834:
98.9204% ( 5) 00:09:46.447 15.834 - 15.929: 98.9355% ( 2) 00:09:46.447 15.929 - 16.024: 98.9657% ( 4) 00:09:46.447 16.024 - 16.119: 98.9808% ( 2) 00:09:46.447 16.119 - 16.213: 99.0110% ( 4) 00:09:46.447 16.308 - 16.403: 99.0261% ( 2) 00:09:46.447 16.403 - 16.498: 99.0865% ( 8) 00:09:46.447 16.498 - 16.593: 99.1620% ( 10) 00:09:46.447 16.593 - 16.687: 99.1922% ( 4) 00:09:46.447 16.687 - 16.782: 99.2149% ( 3) 00:09:46.447 16.782 - 16.877: 99.2375% ( 3) 00:09:46.448 16.877 - 16.972: 99.2828% ( 6) 00:09:46.448 16.972 - 17.067: 99.3281% ( 6) 00:09:46.448 17.067 - 17.161: 99.3356% ( 1) 00:09:46.448 17.161 - 17.256: 99.3507% ( 2) 00:09:46.448 17.351 - 17.446: 99.3583% ( 1) 00:09:46.448 17.446 - 17.541: 99.3809% ( 3) 00:09:46.448 17.541 - 17.636: 99.3885% ( 1) 00:09:46.448 17.730 - 17.825: 99.3960% ( 1) 00:09:46.448 18.110 - 18.204: 99.4036% ( 1) 00:09:46.448 18.204 - 18.299: 99.4111% ( 1) 00:09:46.448 18.299 - 18.394: 99.4262% ( 2) 00:09:46.448 19.247 - 19.342: 99.4338% ( 1) 00:09:46.448 22.850 - 22.945: 99.4413% ( 1) 00:09:46.448 3980.705 - 4004.978: 99.8113% ( 49) 00:09:46.448 4004.978 - 4029.250: 99.9925% ( 24) 00:09:46.448 6990.507 - 7039.052: 100.0000% ( 1) 00:09:46.448 00:09:46.448 21:24:12 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:09:46.448 21:24:12 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:46.448 21:24:12 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:46.448 21:24:12 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:09:46.448 21:24:12 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:46.706 [2024-04-24 21:24:12.350465] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:46.706 [ 00:09:46.706 { 00:09:46.706 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:46.706 "subtype": "Discovery", 00:09:46.706 "listen_addresses": [], 00:09:46.706 "allow_any_host": true, 00:09:46.706 "hosts": [] 00:09:46.706 }, 00:09:46.706 { 00:09:46.706 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:46.706 "subtype": "NVMe", 00:09:46.706 "listen_addresses": [ 00:09:46.706 { 00:09:46.706 "transport": "VFIOUSER", 00:09:46.706 "trtype": "VFIOUSER", 00:09:46.706 "adrfam": "IPv4", 00:09:46.706 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:46.706 "trsvcid": "0" 00:09:46.706 } 00:09:46.706 ], 00:09:46.706 "allow_any_host": true, 00:09:46.706 "hosts": [], 00:09:46.706 "serial_number": "SPDK1", 00:09:46.706 "model_number": "SPDK bdev Controller", 00:09:46.706 "max_namespaces": 32, 00:09:46.706 "min_cntlid": 1, 00:09:46.706 "max_cntlid": 65519, 00:09:46.706 "namespaces": [ 00:09:46.706 { 00:09:46.706 "nsid": 1, 00:09:46.706 "bdev_name": "Malloc1", 00:09:46.706 "name": "Malloc1", 00:09:46.706 "nguid": "285DDECE4712471CB8BC9DCD76CDC454", 00:09:46.706 "uuid": "285ddece-4712-471c-b8bc-9dcd76cdc454" 00:09:46.706 } 00:09:46.706 ] 00:09:46.706 }, 00:09:46.706 { 00:09:46.706 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:46.706 "subtype": "NVMe", 00:09:46.706 "listen_addresses": [ 00:09:46.706 { 00:09:46.706 "transport": "VFIOUSER", 00:09:46.706 "trtype": "VFIOUSER", 00:09:46.706 "adrfam": "IPv4", 00:09:46.706 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:46.706 "trsvcid": "0" 00:09:46.706 } 00:09:46.706 ], 00:09:46.706 "allow_any_host": 
true, 00:09:46.706 "hosts": [], 00:09:46.706 "serial_number": "SPDK2", 00:09:46.706 "model_number": "SPDK bdev Controller", 00:09:46.706 "max_namespaces": 32, 00:09:46.706 "min_cntlid": 1, 00:09:46.706 "max_cntlid": 65519, 00:09:46.706 "namespaces": [ 00:09:46.706 { 00:09:46.706 "nsid": 1, 00:09:46.706 "bdev_name": "Malloc2", 00:09:46.706 "name": "Malloc2", 00:09:46.706 "nguid": "FFB6A5E44506473799980A505DE587DB", 00:09:46.706 "uuid": "ffb6a5e4-4506-4737-9998-0a505de587db" 00:09:46.706 } 00:09:46.706 ] 00:09:46.706 } 00:09:46.706 ] 00:09:46.706 21:24:12 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:46.706 21:24:12 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2552437 00:09:46.706 21:24:12 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:46.706 21:24:12 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:46.706 21:24:12 -- common/autotest_common.sh@1251 -- # local i=0 00:09:46.706 21:24:12 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:46.706 21:24:12 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:46.706 21:24:12 -- common/autotest_common.sh@1262 -- # return 0 00:09:46.706 21:24:12 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:46.707 21:24:12 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:46.965 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.965 [2024-04-24 21:24:12.520090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:46.965 Malloc3 00:09:46.965 21:24:12 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:47.224 [2024-04-24 21:24:12.875592] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:47.224 21:24:12 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:47.490 Asynchronous Event Request test 00:09:47.490 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:47.490 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:47.490 Registering asynchronous event callbacks... 00:09:47.490 Starting namespace attribute notice tests for all controllers... 00:09:47.490 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:47.490 aer_cb - Changed Namespace 00:09:47.490 Cleaning up... 
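For reference, the namespace-attach AER exercise traced above reduces to the RPC sequence below (a minimal sketch: SPDK_DIR stands in for the workspace spdk checkout, and the pid bookkeeping and waitforfile plumbing of the real script are elided):
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the AER listener; -t makes it touch the file once its callbacks
  # are registered, which the driver script waits on before triggering
  $SPDK_DIR/test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file &
  # trigger the Changed Namespace notice: create a bdev, attach it as nsid 2
  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  # confirm the new namespace is visible, as in the JSON that follows
  $SPDK_DIR/scripts/rpc.py nvmf_get_subsystems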
00:09:47.490 [ 00:09:47.490 { 00:09:47.490 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:47.490 "subtype": "Discovery", 00:09:47.490 "listen_addresses": [], 00:09:47.490 "allow_any_host": true, 00:09:47.490 "hosts": [] 00:09:47.490 }, 00:09:47.490 { 00:09:47.490 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:47.490 "subtype": "NVMe", 00:09:47.490 "listen_addresses": [ 00:09:47.490 { 00:09:47.490 "transport": "VFIOUSER", 00:09:47.490 "trtype": "VFIOUSER", 00:09:47.490 "adrfam": "IPv4", 00:09:47.490 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:47.490 "trsvcid": "0" 00:09:47.490 } 00:09:47.490 ], 00:09:47.490 "allow_any_host": true, 00:09:47.490 "hosts": [], 00:09:47.490 "serial_number": "SPDK1", 00:09:47.490 "model_number": "SPDK bdev Controller", 00:09:47.490 "max_namespaces": 32, 00:09:47.490 "min_cntlid": 1, 00:09:47.490 "max_cntlid": 65519, 00:09:47.490 "namespaces": [ 00:09:47.490 { 00:09:47.490 "nsid": 1, 00:09:47.490 "bdev_name": "Malloc1", 00:09:47.490 "name": "Malloc1", 00:09:47.490 "nguid": "285DDECE4712471CB8BC9DCD76CDC454", 00:09:47.490 "uuid": "285ddece-4712-471c-b8bc-9dcd76cdc454" 00:09:47.490 }, 00:09:47.490 { 00:09:47.490 "nsid": 2, 00:09:47.490 "bdev_name": "Malloc3", 00:09:47.490 "name": "Malloc3", 00:09:47.490 "nguid": "5AF43DC50AEA4A66806C3975C8E1FBB6", 00:09:47.490 "uuid": "5af43dc5-0aea-4a66-806c-3975c8e1fbb6" 00:09:47.490 } 00:09:47.490 ] 00:09:47.490 }, 00:09:47.490 { 00:09:47.490 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:47.490 "subtype": "NVMe", 00:09:47.490 "listen_addresses": [ 00:09:47.490 { 00:09:47.490 "transport": "VFIOUSER", 00:09:47.490 "trtype": "VFIOUSER", 00:09:47.490 "adrfam": "IPv4", 00:09:47.490 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:47.490 "trsvcid": "0" 00:09:47.490 } 00:09:47.490 ], 00:09:47.490 "allow_any_host": true, 00:09:47.490 "hosts": [], 00:09:47.490 "serial_number": "SPDK2", 00:09:47.490 "model_number": "SPDK bdev Controller", 00:09:47.490 "max_namespaces": 32, 00:09:47.490 "min_cntlid": 1, 00:09:47.490 "max_cntlid": 65519, 00:09:47.490 "namespaces": [ 00:09:47.490 { 00:09:47.490 "nsid": 1, 00:09:47.490 "bdev_name": "Malloc2", 00:09:47.490 "name": "Malloc2", 00:09:47.490 "nguid": "FFB6A5E44506473799980A505DE587DB", 00:09:47.490 "uuid": "ffb6a5e4-4506-4737-9998-0a505de587db" 00:09:47.490 } 00:09:47.490 ] 00:09:47.490 } 00:09:47.490 ] 00:09:47.490 21:24:13 -- target/nvmf_vfio_user.sh@44 -- # wait 2552437 00:09:47.490 21:24:13 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:47.490 21:24:13 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:47.490 21:24:13 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:47.490 21:24:13 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:47.490 [2024-04-24 21:24:13.148234] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:09:47.490 [2024-04-24 21:24:13.148283] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552455 ] 00:09:47.490 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.760 [2024-04-24 21:24:13.182822] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:47.760 [2024-04-24 21:24:13.190961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:47.760 [2024-04-24 21:24:13.190992] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbc68a10000 00:09:47.760 [2024-04-24 21:24:13.191958] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:47.760 [2024-04-24 21:24:13.192968] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:47.760 [2024-04-24 21:24:13.193978] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:47.760 [2024-04-24 21:24:13.194973] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:47.760 [2024-04-24 21:24:13.195983] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:47.760 [2024-04-24 21:24:13.196987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:47.760 [2024-04-24 21:24:13.197997] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:47.760 [2024-04-24 21:24:13.199015] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:47.760 [2024-04-24 21:24:13.200005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:47.760 [2024-04-24 21:24:13.200030] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbc68a05000 00:09:47.760 [2024-04-24 21:24:13.201143] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:47.760 [2024-04-24 21:24:13.217325] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:47.760 [2024-04-24 21:24:13.217358] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:47.760 [2024-04-24 21:24:13.222466] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:47.760 [2024-04-24 21:24:13.222523] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:47.760 [2024-04-24 21:24:13.222626] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:09:47.760 [2024-04-24 21:24:13.222659] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:47.760 [2024-04-24 21:24:13.222670] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:47.760 [2024-04-24 21:24:13.223475] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:47.760 [2024-04-24 21:24:13.223495] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:47.760 [2024-04-24 21:24:13.223514] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:47.760 [2024-04-24 21:24:13.224475] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:47.760 [2024-04-24 21:24:13.224496] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:47.760 [2024-04-24 21:24:13.224509] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:47.760 [2024-04-24 21:24:13.225484] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:47.760 [2024-04-24 21:24:13.225504] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:47.760 [2024-04-24 21:24:13.226490] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:47.760 [2024-04-24 21:24:13.226510] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:47.760 [2024-04-24 21:24:13.226519] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:47.760 [2024-04-24 21:24:13.226530] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:47.760 [2024-04-24 21:24:13.226641] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:47.760 [2024-04-24 21:24:13.226651] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:47.760 [2024-04-24 21:24:13.226659] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:47.760 [2024-04-24 21:24:13.227496] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:47.760 [2024-04-24 21:24:13.228503] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:47.760 [2024-04-24 21:24:13.229515] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:47.760 [2024-04-24 21:24:13.230513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:47.761 [2024-04-24 21:24:13.230578] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:47.761 [2024-04-24 21:24:13.231528] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:47.761 [2024-04-24 21:24:13.231548] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:47.761 [2024-04-24 21:24:13.231557] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.231581] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:47.761 [2024-04-24 21:24:13.231594] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.231637] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:47.761 [2024-04-24 21:24:13.231652] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:47.761 [2024-04-24 21:24:13.231670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:47.761 [2024-04-24 21:24:13.235643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:47.761 [2024-04-24 21:24:13.235665] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:47.761 [2024-04-24 21:24:13.235689] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:47.761 [2024-04-24 21:24:13.235697] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:47.761 [2024-04-24 21:24:13.235704] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:47.761 [2024-04-24 21:24:13.235712] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:47.761 [2024-04-24 21:24:13.235720] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:47.761 [2024-04-24 21:24:13.235728] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.235741] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.235757] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:47.761 [2024-04-24 21:24:13.243640] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:47.761 [2024-04-24 21:24:13.243667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:47.761 [2024-04-24 21:24:13.243697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:47.761 [2024-04-24 21:24:13.243710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:47.761 [2024-04-24 21:24:13.243721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:47.761 [2024-04-24 21:24:13.243730] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.243746] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.243760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:47.761 [2024-04-24 21:24:13.251653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:47.761 [2024-04-24 21:24:13.251671] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:47.761 [2024-04-24 21:24:13.251680] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.251696] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.251707] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.251724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:47.761 [2024-04-24 21:24:13.259638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:47.761 [2024-04-24 21:24:13.259698] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.259714] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.259726] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:47.761 [2024-04-24 21:24:13.259734] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:47.761 [2024-04-24 21:24:13.259744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:47.761 
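An aside on reading the SET FEATURES NUMBER OF QUEUES completion above: per the NVMe spec, completion dword 0 (the cdw0:7e007e) carries the granted queue counts, zero-based, with completion queues in the upper 16 bits and submission queues in the lower 16. A quick shell check of that decoding (the variable name is illustrative):
  cdw0=0x7e007e
  # both fields are zero-based, so add 1; each prints 127 here, matching the
  # Number of I/O Submission/Completion Queues reported by identify further down
  printf 'I/O submission queues granted: %d\n' $(( (cdw0 & 0xffff) + 1 ))
  printf 'I/O completion queues granted: %d\n' $(( (cdw0 >> 16) + 1 ))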
[2024-04-24 21:24:13.267637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:47.761 [2024-04-24 21:24:13.267660] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:47.761 [2024-04-24 21:24:13.267680] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.267695] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.267707] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:47.761 [2024-04-24 21:24:13.267716] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:47.761 [2024-04-24 21:24:13.267726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:47.761 [2024-04-24 21:24:13.275657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:47.761 [2024-04-24 21:24:13.275685] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.275701] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.275714] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:47.761 [2024-04-24 21:24:13.275723] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:47.761 [2024-04-24 21:24:13.275733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:47.761 [2024-04-24 21:24:13.283638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:47.761 [2024-04-24 21:24:13.283658] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.283672] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.283685] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.283696] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.283704] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.283715] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:09:47.761 [2024-04-24 21:24:13.283723] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:47.761 [2024-04-24 21:24:13.283731] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:47.761 [2024-04-24 21:24:13.283755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:47.761 [2024-04-24 21:24:13.291640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:47.761 [2024-04-24 21:24:13.291665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:47.761 [2024-04-24 21:24:13.299636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:47.761 [2024-04-24 21:24:13.299661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:47.761 [2024-04-24 21:24:13.307640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:47.761 [2024-04-24 21:24:13.307664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:47.761 [2024-04-24 21:24:13.315655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:47.761 [2024-04-24 21:24:13.315681] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:47.761 [2024-04-24 21:24:13.315691] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:47.761 [2024-04-24 21:24:13.315697] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:47.761 [2024-04-24 21:24:13.315704] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:47.761 [2024-04-24 21:24:13.315713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:47.761 [2024-04-24 21:24:13.315725] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:47.761 [2024-04-24 21:24:13.315734] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:47.761 [2024-04-24 21:24:13.315742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:47.761 [2024-04-24 21:24:13.315753] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:47.761 [2024-04-24 21:24:13.315761] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:47.762 [2024-04-24 21:24:13.315770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:47.762 [2024-04-24 21:24:13.315783] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:47.762 [2024-04-24 21:24:13.315791] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:47.762 [2024-04-24 21:24:13.315800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:47.762 [2024-04-24 21:24:13.323640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:47.762 [2024-04-24 21:24:13.323682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:47.762 [2024-04-24 21:24:13.323703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:47.762 [2024-04-24 21:24:13.323728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:47.762 ===================================================== 00:09:47.762 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:47.762 ===================================================== 00:09:47.762 Controller Capabilities/Features 00:09:47.762 ================================ 00:09:47.762 Vendor ID: 4e58 00:09:47.762 Subsystem Vendor ID: 4e58 00:09:47.762 Serial Number: SPDK2 00:09:47.762 Model Number: SPDK bdev Controller 00:09:47.762 Firmware Version: 24.05 00:09:47.762 Recommended Arb Burst: 6 00:09:47.762 IEEE OUI Identifier: 8d 6b 50 00:09:47.762 Multi-path I/O 00:09:47.762 May have multiple subsystem ports: Yes 00:09:47.762 May have multiple controllers: Yes 00:09:47.762 Associated with SR-IOV VF: No 00:09:47.762 Max Data Transfer Size: 131072 00:09:47.762 Max Number of Namespaces: 32 00:09:47.762 Max Number of I/O Queues: 127 00:09:47.762 NVMe Specification Version (VS): 1.3 00:09:47.762 NVMe Specification Version (Identify): 1.3 00:09:47.762 Maximum Queue Entries: 256 00:09:47.762 Contiguous Queues Required: Yes 00:09:47.762 Arbitration Mechanisms Supported 00:09:47.762 Weighted Round Robin: Not Supported 00:09:47.762 Vendor Specific: Not Supported 00:09:47.762 Reset Timeout: 15000 ms 00:09:47.762 Doorbell Stride: 4 bytes 00:09:47.762 NVM Subsystem Reset: Not Supported 00:09:47.762 Command Sets Supported 00:09:47.762 NVM Command Set: Supported 00:09:47.762 Boot Partition: Not Supported 00:09:47.762 Memory Page Size Minimum: 4096 bytes 00:09:47.762 Memory Page Size Maximum: 4096 bytes 00:09:47.762 Persistent Memory Region: Not Supported 00:09:47.762 Optional Asynchronous Events Supported 00:09:47.762 Namespace Attribute Notices: Supported 00:09:47.762 Firmware Activation Notices: Not Supported 00:09:47.762 ANA Change Notices: Not Supported 00:09:47.762 PLE Aggregate Log Change Notices: Not Supported 00:09:47.762 LBA Status Info Alert Notices: Not Supported 00:09:47.762 EGE Aggregate Log Change Notices: Not Supported 00:09:47.762 Normal NVM Subsystem Shutdown event: Not Supported 00:09:47.762 Zone Descriptor Change Notices: Not Supported 00:09:47.762 Discovery Log Change Notices: Not Supported 00:09:47.762 Controller Attributes 00:09:47.762 128-bit Host Identifier: Supported 00:09:47.762 Non-Operational Permissive Mode: Not Supported 00:09:47.762 NVM Sets: Not Supported 00:09:47.762 Read Recovery Levels: Not Supported 00:09:47.762 Endurance Groups: Not Supported 00:09:47.762 Predictable Latency Mode: Not Supported 00:09:47.762 Traffic Based Keep ALive: Not Supported 00:09:47.762 Namespace Granularity: Not Supported 
00:09:47.762 SQ Associations: Not Supported 00:09:47.762 UUID List: Not Supported 00:09:47.762 Multi-Domain Subsystem: Not Supported 00:09:47.762 Fixed Capacity Management: Not Supported 00:09:47.762 Variable Capacity Management: Not Supported 00:09:47.762 Delete Endurance Group: Not Supported 00:09:47.762 Delete NVM Set: Not Supported 00:09:47.762 Extended LBA Formats Supported: Not Supported 00:09:47.762 Flexible Data Placement Supported: Not Supported 00:09:47.762 00:09:47.762 Controller Memory Buffer Support 00:09:47.762 ================================ 00:09:47.762 Supported: No 00:09:47.762 00:09:47.762 Persistent Memory Region Support 00:09:47.762 ================================ 00:09:47.762 Supported: No 00:09:47.762 00:09:47.762 Admin Command Set Attributes 00:09:47.762 ============================ 00:09:47.762 Security Send/Receive: Not Supported 00:09:47.762 Format NVM: Not Supported 00:09:47.762 Firmware Activate/Download: Not Supported 00:09:47.762 Namespace Management: Not Supported 00:09:47.762 Device Self-Test: Not Supported 00:09:47.762 Directives: Not Supported 00:09:47.762 NVMe-MI: Not Supported 00:09:47.762 Virtualization Management: Not Supported 00:09:47.762 Doorbell Buffer Config: Not Supported 00:09:47.762 Get LBA Status Capability: Not Supported 00:09:47.762 Command & Feature Lockdown Capability: Not Supported 00:09:47.762 Abort Command Limit: 4 00:09:47.762 Async Event Request Limit: 4 00:09:47.762 Number of Firmware Slots: N/A 00:09:47.762 Firmware Slot 1 Read-Only: N/A 00:09:47.762 Firmware Activation Without Reset: N/A 00:09:47.762 Multiple Update Detection Support: N/A 00:09:47.762 Firmware Update Granularity: No Information Provided 00:09:47.762 Per-Namespace SMART Log: No 00:09:47.762 Asymmetric Namespace Access Log Page: Not Supported 00:09:47.762 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:47.762 Command Effects Log Page: Supported 00:09:47.762 Get Log Page Extended Data: Supported 00:09:47.762 Telemetry Log Pages: Not Supported 00:09:47.762 Persistent Event Log Pages: Not Supported 00:09:47.762 Supported Log Pages Log Page: May Support 00:09:47.762 Commands Supported & Effects Log Page: Not Supported 00:09:47.762 Feature Identifiers & Effects Log Page:May Support 00:09:47.762 NVMe-MI Commands & Effects Log Page: May Support 00:09:47.762 Data Area 4 for Telemetry Log: Not Supported 00:09:47.762 Error Log Page Entries Supported: 128 00:09:47.762 Keep Alive: Supported 00:09:47.762 Keep Alive Granularity: 10000 ms 00:09:47.762 00:09:47.762 NVM Command Set Attributes 00:09:47.762 ========================== 00:09:47.762 Submission Queue Entry Size 00:09:47.762 Max: 64 00:09:47.762 Min: 64 00:09:47.762 Completion Queue Entry Size 00:09:47.762 Max: 16 00:09:47.762 Min: 16 00:09:47.762 Number of Namespaces: 32 00:09:47.762 Compare Command: Supported 00:09:47.762 Write Uncorrectable Command: Not Supported 00:09:47.762 Dataset Management Command: Supported 00:09:47.762 Write Zeroes Command: Supported 00:09:47.762 Set Features Save Field: Not Supported 00:09:47.762 Reservations: Not Supported 00:09:47.762 Timestamp: Not Supported 00:09:47.762 Copy: Supported 00:09:47.762 Volatile Write Cache: Present 00:09:47.762 Atomic Write Unit (Normal): 1 00:09:47.762 Atomic Write Unit (PFail): 1 00:09:47.762 Atomic Compare & Write Unit: 1 00:09:47.762 Fused Compare & Write: Supported 00:09:47.762 Scatter-Gather List 00:09:47.762 SGL Command Set: Supported (Dword aligned) 00:09:47.762 SGL Keyed: Not Supported 00:09:47.762 SGL Bit Bucket Descriptor: Not Supported 00:09:47.762 
SGL Metadata Pointer: Not Supported 00:09:47.762 Oversized SGL: Not Supported 00:09:47.762 SGL Metadata Address: Not Supported 00:09:47.762 SGL Offset: Not Supported 00:09:47.762 Transport SGL Data Block: Not Supported 00:09:47.762 Replay Protected Memory Block: Not Supported 00:09:47.762 00:09:47.762 Firmware Slot Information 00:09:47.762 ========================= 00:09:47.762 Active slot: 1 00:09:47.762 Slot 1 Firmware Revision: 24.05 00:09:47.762 00:09:47.762 00:09:47.762 Commands Supported and Effects 00:09:47.762 ============================== 00:09:47.762 Admin Commands 00:09:47.762 -------------- 00:09:47.762 Get Log Page (02h): Supported 00:09:47.762 Identify (06h): Supported 00:09:47.762 Abort (08h): Supported 00:09:47.762 Set Features (09h): Supported 00:09:47.762 Get Features (0Ah): Supported 00:09:47.762 Asynchronous Event Request (0Ch): Supported 00:09:47.762 Keep Alive (18h): Supported 00:09:47.762 I/O Commands 00:09:47.762 ------------ 00:09:47.762 Flush (00h): Supported LBA-Change 00:09:47.763 Write (01h): Supported LBA-Change 00:09:47.763 Read (02h): Supported 00:09:47.763 Compare (05h): Supported 00:09:47.763 Write Zeroes (08h): Supported LBA-Change 00:09:47.763 Dataset Management (09h): Supported LBA-Change 00:09:47.763 Copy (19h): Supported LBA-Change 00:09:47.763 Unknown (79h): Supported LBA-Change 00:09:47.763 Unknown (7Ah): Supported 00:09:47.763 00:09:47.763 Error Log 00:09:47.763 ========= 00:09:47.763 00:09:47.763 Arbitration 00:09:47.763 =========== 00:09:47.763 Arbitration Burst: 1 00:09:47.763 00:09:47.763 Power Management 00:09:47.763 ================ 00:09:47.763 Number of Power States: 1 00:09:47.763 Current Power State: Power State #0 00:09:47.763 Power State #0: 00:09:47.763 Max Power: 0.00 W 00:09:47.763 Non-Operational State: Operational 00:09:47.763 Entry Latency: Not Reported 00:09:47.763 Exit Latency: Not Reported 00:09:47.763 Relative Read Throughput: 0 00:09:47.763 Relative Read Latency: 0 00:09:47.763 Relative Write Throughput: 0 00:09:47.763 Relative Write Latency: 0 00:09:47.763 Idle Power: Not Reported 00:09:47.763 Active Power: Not Reported 00:09:47.763 Non-Operational Permissive Mode: Not Supported 00:09:47.763 00:09:47.763 Health Information 00:09:47.763 ================== 00:09:47.763 Critical Warnings: 00:09:47.763 Available Spare Space: OK 00:09:47.763 Temperature: OK 00:09:47.763 Device Reliability: OK 00:09:47.763 Read Only: No 00:09:47.763 Volatile Memory Backup: OK 00:09:47.763
[2024-04-24 21:24:13.323862] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:47.763 [2024-04-24 21:24:13.331637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:47.763 [2024-04-24 21:24:13.331685] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:47.763 [2024-04-24 21:24:13.331702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.763 [2024-04-24 21:24:13.331712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.763 [2024-04-24 21:24:13.331722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.763 [2024-04-24 21:24:13.331731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.763 [2024-04-24 21:24:13.331813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:47.763 [2024-04-24 21:24:13.331834] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:47.763 [2024-04-24 21:24:13.332819] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:47.763 [2024-04-24 21:24:13.332888] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:47.763 [2024-04-24 21:24:13.332902] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:47.763 [2024-04-24 21:24:13.333827] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:47.763 [2024-04-24 21:24:13.333851] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:47.763 [2024-04-24 21:24:13.333904] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:47.763 [2024-04-24 21:24:13.336639] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:47.763
Current Temperature: 0 Kelvin (-273 Celsius) 00:09:47.763 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:47.763 Available Spare: 0% 00:09:47.763 Available Spare Threshold: 0% 00:09:47.763 Life Percentage Used: 0% 00:09:47.763 Data Units Read: 0 00:09:47.763 Data Units Written: 0 00:09:47.763 Host Read Commands: 0 00:09:47.763 Host Write Commands: 0 00:09:47.763 Controller Busy Time: 0 minutes 00:09:47.763 Power Cycles: 0 00:09:47.763 Power On Hours: 0 hours 00:09:47.763 Unsafe Shutdowns: 0 00:09:47.763 Unrecoverable Media Errors: 0 00:09:47.763 Lifetime Error Log Entries: 0 00:09:47.763 Warning Temperature Time: 0 minutes 00:09:47.763 Critical Temperature Time: 0 minutes 00:09:47.763 00:09:47.763 Number of Queues 00:09:47.763 ================ 00:09:47.763 Number of I/O Submission Queues: 127 00:09:47.763 Number of I/O Completion Queues: 127 00:09:47.763 00:09:47.763 Active Namespaces 00:09:47.763 ================= 00:09:47.763 Namespace ID:1 00:09:47.763 Error Recovery Timeout: Unlimited 00:09:47.763 Command Set Identifier: NVM (00h) 00:09:47.763 Deallocate: Supported 00:09:47.763 Deallocated/Unwritten Error: Not Supported 00:09:47.763 Deallocated Read Value: Unknown 00:09:47.763 Deallocate in Write Zeroes: Not Supported 00:09:47.763 Deallocated Guard Field: 0xFFFF 00:09:47.763 Flush: Supported 00:09:47.763 Reservation: Supported 00:09:47.763 Namespace Sharing Capabilities: Multiple Controllers 00:09:47.763 Size (in LBAs): 131072 (0GiB) 00:09:47.763 Capacity (in LBAs): 131072 (0GiB) 00:09:47.763 Utilization (in LBAs): 131072 (0GiB) 00:09:47.763 NGUID: FFB6A5E44506473799980A505DE587DB 00:09:47.763 UUID: ffb6a5e4-4506-4737-9998-0a505de587db 00:09:47.763 Thin Provisioning: Not Supported 00:09:47.763 Per-NS Atomic Units: Yes 00:09:47.763 Atomic Boundary Size (Normal): 0 00:09:47.763 Atomic Boundary Size (PFail): 0 00:09:47.763 Atomic Boundary Offset: 0 00:09:47.763 Maximum Single Source Range Length: 65535
00:09:47.763 Maximum Copy Length: 65535 00:09:47.763 Maximum Source Range Count: 1 00:09:47.763 NGUID/EUI64 Never Reused: No 00:09:47.763 Namespace Write Protected: No 00:09:47.763 Number of LBA Formats: 1 00:09:47.763 Current LBA Format: LBA Format #00 00:09:47.763 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.763 00:09:47.763 21:24:13 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:47.763 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.021 [2024-04-24 21:24:13.575456] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:53.297 [2024-04-24 21:24:18.678991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:53.297 Initializing NVMe Controllers 00:09:53.297 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:53.297 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:09:53.297 Initialization complete. Launching workers. 00:09:53.297 ======================================================== 00:09:53.297 Latency(us) 00:09:53.297 Device Information : IOPS MiB/s Average min max 00:09:53.297 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34041.72 132.98 3759.40 1202.52 7315.97 00:09:53.297 ======================================================== 00:09:53.297 Total : 34041.72 132.98 3759.40 1202.52 7315.97 00:09:53.297 00:09:53.297 21:24:18 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:53.297 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.297 [2024-04-24 21:24:18.923646] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:58.574 [2024-04-24 21:24:23.943870] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:58.574 Initializing NVMe Controllers 00:09:58.574 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:58.574 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:09:58.574 Initialization complete. Launching workers. 
00:09:58.574 ======================================================== 00:09:58.574 Latency(us) 00:09:58.574 Device Information : IOPS MiB/s Average min max 00:09:58.574 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32130.44 125.51 3982.94 1229.57 8251.91 00:09:58.574 ======================================================== 00:09:58.575 Total : 32130.44 125.51 3982.94 1229.57 8251.91 00:09:58.575 00:09:58.575 21:24:23 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:58.575 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.575 [2024-04-24 21:24:24.155762] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:03.850 [2024-04-24 21:24:29.299022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:03.850 Initializing NVMe Controllers 00:10:03.850 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:03.850 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:03.850 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:03.850 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:03.850 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:03.850 Initialization complete. Launching workers. 00:10:03.850 Starting thread on core 2 00:10:03.850 Starting thread on core 3 00:10:03.850 Starting thread on core 1 00:10:03.850 21:24:29 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:03.850 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.109 [2024-04-24 21:24:29.608148] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:07.402 [2024-04-24 21:24:32.673648] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:07.402 Initializing NVMe Controllers 00:10:07.402 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:07.402 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:07.402 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:07.402 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:07.402 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:07.402 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:07.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:07.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:07.402 Initialization complete. Launching workers. 
00:10:07.402 Starting thread on core 1 with urgent priority queue 00:10:07.402 Starting thread on core 2 with urgent priority queue 00:10:07.402 Starting thread on core 3 with urgent priority queue 00:10:07.402 Starting thread on core 0 with urgent priority queue 00:10:07.402 SPDK bdev Controller (SPDK2 ) core 0: 5059.00 IO/s 19.77 secs/100000 ios 00:10:07.402 SPDK bdev Controller (SPDK2 ) core 1: 5464.33 IO/s 18.30 secs/100000 ios 00:10:07.402 SPDK bdev Controller (SPDK2 ) core 2: 5417.33 IO/s 18.46 secs/100000 ios 00:10:07.402 SPDK bdev Controller (SPDK2 ) core 3: 5878.67 IO/s 17.01 secs/100000 ios 00:10:07.402 ======================================================== 00:10:07.402 00:10:07.402 21:24:32 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:07.402 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.402 [2024-04-24 21:24:32.979135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:07.402 [2024-04-24 21:24:32.988191] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:07.402 Initializing NVMe Controllers 00:10:07.402 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:07.402 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:07.402 Namespace ID: 1 size: 0GB 00:10:07.402 Initialization complete. 00:10:07.402 INFO: using host memory buffer for IO 00:10:07.402 Hello world! 00:10:07.402 21:24:33 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:07.661 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.661 [2024-04-24 21:24:33.286963] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:09.075 Initializing NVMe Controllers 00:10:09.075 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:09.075 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:09.075 Initialization complete. Launching workers. 
00:10:09.075 submit (in ns) avg, min, max = 8080.0, 3471.1, 4015254.4 00:10:09.075 complete (in ns) avg, min, max = 22559.7, 2038.9, 4015202.2 00:10:09.075 00:10:09.075 Submit histogram 00:10:09.075 ================ 00:10:09.075 Range in us Cumulative Count 00:10:09.075 3.461 - 3.484: 0.0523% ( 7) 00:10:09.075 3.484 - 3.508: 0.5980% ( 73) 00:10:09.075 3.508 - 3.532: 2.3471% ( 234) 00:10:09.075 3.532 - 3.556: 5.4492% ( 415) 00:10:09.075 3.556 - 3.579: 12.2589% ( 911) 00:10:09.075 3.579 - 3.603: 20.6608% ( 1124) 00:10:09.075 3.603 - 3.627: 30.7669% ( 1352) 00:10:09.075 3.627 - 3.650: 40.0583% ( 1243) 00:10:09.075 3.650 - 3.674: 47.9818% ( 1060) 00:10:09.075 3.674 - 3.698: 53.8646% ( 787) 00:10:09.075 3.698 - 3.721: 58.2449% ( 586) 00:10:09.075 3.721 - 3.745: 62.4309% ( 560) 00:10:09.075 3.745 - 3.769: 65.6376% ( 429) 00:10:09.075 3.769 - 3.793: 68.8892% ( 435) 00:10:09.075 3.793 - 3.816: 71.8643% ( 398) 00:10:09.075 3.816 - 3.840: 75.6615% ( 508) 00:10:09.075 3.840 - 3.864: 79.8101% ( 555) 00:10:09.075 3.864 - 3.887: 83.2561% ( 461) 00:10:09.075 3.887 - 3.911: 85.9770% ( 364) 00:10:09.075 3.911 - 3.935: 87.9429% ( 263) 00:10:09.075 3.935 - 3.959: 89.4678% ( 204) 00:10:09.075 3.959 - 3.982: 90.8432% ( 184) 00:10:09.075 3.982 - 4.006: 92.1214% ( 171) 00:10:09.075 4.006 - 4.030: 92.9661% ( 113) 00:10:09.075 4.030 - 4.053: 93.7435% ( 104) 00:10:09.075 4.053 - 4.077: 94.6778% ( 125) 00:10:09.075 4.077 - 4.101: 95.3581% ( 91) 00:10:09.075 4.101 - 4.124: 95.9037% ( 73) 00:10:09.075 4.124 - 4.148: 96.1653% ( 35) 00:10:09.075 4.148 - 4.172: 96.4494% ( 38) 00:10:09.075 4.172 - 4.196: 96.6363% ( 25) 00:10:09.075 4.196 - 4.219: 96.7708% ( 18) 00:10:09.075 4.219 - 4.243: 96.9203% ( 20) 00:10:09.075 4.243 - 4.267: 97.0997% ( 24) 00:10:09.075 4.267 - 4.290: 97.2343% ( 18) 00:10:09.075 4.290 - 4.314: 97.3314% ( 13) 00:10:09.075 4.314 - 4.338: 97.4809% ( 20) 00:10:09.075 4.338 - 4.361: 97.5706% ( 12) 00:10:09.075 4.361 - 4.385: 97.6155% ( 6) 00:10:09.075 4.385 - 4.409: 97.6304% ( 2) 00:10:09.075 4.409 - 4.433: 97.6603% ( 4) 00:10:09.075 4.433 - 4.456: 97.6828% ( 3) 00:10:09.075 4.456 - 4.480: 97.7052% ( 3) 00:10:09.075 4.480 - 4.504: 97.7201% ( 2) 00:10:09.075 4.504 - 4.527: 97.7276% ( 1) 00:10:09.075 4.527 - 4.551: 97.7351% ( 1) 00:10:09.075 4.551 - 4.575: 97.7426% ( 1) 00:10:09.075 4.575 - 4.599: 97.7650% ( 3) 00:10:09.075 4.599 - 4.622: 97.7725% ( 1) 00:10:09.075 4.622 - 4.646: 97.7874% ( 2) 00:10:09.075 4.646 - 4.670: 97.7949% ( 1) 00:10:09.075 4.670 - 4.693: 97.8024% ( 1) 00:10:09.075 4.693 - 4.717: 97.8323% ( 4) 00:10:09.075 4.717 - 4.741: 97.8547% ( 3) 00:10:09.075 4.741 - 4.764: 97.8995% ( 6) 00:10:09.075 4.764 - 4.788: 97.9444% ( 6) 00:10:09.075 4.788 - 4.812: 97.9892% ( 6) 00:10:09.075 4.812 - 4.836: 98.0117% ( 3) 00:10:09.075 4.836 - 4.859: 98.0490% ( 5) 00:10:09.075 4.859 - 4.883: 98.0939% ( 6) 00:10:09.075 4.883 - 4.907: 98.1238% ( 4) 00:10:09.075 4.907 - 4.930: 98.1686% ( 6) 00:10:09.075 4.930 - 4.954: 98.1761% ( 1) 00:10:09.075 4.954 - 4.978: 98.2135% ( 5) 00:10:09.075 4.978 - 5.001: 98.2658% ( 7) 00:10:09.075 5.001 - 5.025: 98.3181% ( 7) 00:10:09.075 5.025 - 5.049: 98.3480% ( 4) 00:10:09.075 5.049 - 5.073: 98.3779% ( 4) 00:10:09.076 5.073 - 5.096: 98.4602% ( 11) 00:10:09.076 5.096 - 5.120: 98.4751% ( 2) 00:10:09.076 5.120 - 5.144: 98.4826% ( 1) 00:10:09.076 5.144 - 5.167: 98.4975% ( 2) 00:10:09.076 5.167 - 5.191: 98.5125% ( 2) 00:10:09.076 5.191 - 5.215: 98.5274% ( 2) 00:10:09.076 5.215 - 5.239: 98.5499% ( 3) 00:10:09.076 5.239 - 5.262: 98.5798% ( 4) 00:10:09.076 5.262 - 5.286: 98.5872% ( 1) 
00:10:09.076 5.286 - 5.310: 98.5947% ( 1) 00:10:09.076 5.452 - 5.476: 98.6022% ( 1) 00:10:09.076 5.547 - 5.570: 98.6171% ( 2) 00:10:09.076 5.594 - 5.618: 98.6246% ( 1) 00:10:09.076 6.779 - 6.827: 98.6321% ( 1) 00:10:09.076 6.921 - 6.969: 98.6396% ( 1) 00:10:09.076 7.206 - 7.253: 98.6470% ( 1) 00:10:09.076 7.348 - 7.396: 98.6620% ( 2) 00:10:09.076 7.490 - 7.538: 98.6769% ( 2) 00:10:09.076 7.775 - 7.822: 98.6919% ( 2) 00:10:09.076 7.822 - 7.870: 98.7143% ( 3) 00:10:09.076 7.870 - 7.917: 98.7218% ( 1) 00:10:09.076 7.917 - 7.964: 98.7293% ( 1) 00:10:09.076 7.964 - 8.012: 98.7367% ( 1) 00:10:09.076 8.012 - 8.059: 98.7442% ( 1) 00:10:09.076 8.154 - 8.201: 98.7592% ( 2) 00:10:09.076 8.249 - 8.296: 98.7666% ( 1) 00:10:09.076 8.296 - 8.344: 98.7741% ( 1) 00:10:09.076 8.486 - 8.533: 98.7965% ( 3) 00:10:09.076 8.533 - 8.581: 98.8040% ( 1) 00:10:09.076 8.676 - 8.723: 98.8115% ( 1) 00:10:09.076 9.007 - 9.055: 98.8190% ( 1) 00:10:09.076 9.244 - 9.292: 98.8264% ( 1) 00:10:09.076 9.292 - 9.339: 98.8339% ( 1) 00:10:09.076 9.481 - 9.529: 98.8414% ( 1) 00:10:09.076 9.576 - 9.624: 98.8489% ( 1) 00:10:09.076 9.671 - 9.719: 98.8638% ( 2) 00:10:09.076 9.719 - 9.766: 98.8713% ( 1) 00:10:09.076 9.813 - 9.861: 98.8788% ( 1) 00:10:09.076 9.861 - 9.908: 98.8862% ( 1) 00:10:09.076 10.098 - 10.145: 98.8937% ( 1) 00:10:09.076 10.335 - 10.382: 98.9012% ( 1) 00:10:09.076 10.430 - 10.477: 98.9087% ( 1) 00:10:09.076 10.477 - 10.524: 98.9161% ( 1) 00:10:09.076 10.809 - 10.856: 98.9236% ( 1) 00:10:09.076 10.904 - 10.951: 98.9311% ( 1) 00:10:09.076 10.999 - 11.046: 98.9386% ( 1) 00:10:09.076 11.330 - 11.378: 98.9460% ( 1) 00:10:09.076 11.425 - 11.473: 98.9535% ( 1) 00:10:09.076 11.710 - 11.757: 98.9610% ( 1) 00:10:09.076 12.231 - 12.326: 98.9685% ( 1) 00:10:09.076 13.179 - 13.274: 98.9759% ( 1) 00:10:09.076 13.274 - 13.369: 98.9834% ( 1) 00:10:09.076 14.033 - 14.127: 98.9909% ( 1) 00:10:09.076 14.507 - 14.601: 98.9984% ( 1) 00:10:09.076 17.161 - 17.256: 99.0058% ( 1) 00:10:09.076 17.256 - 17.351: 99.0208% ( 2) 00:10:09.076 17.351 - 17.446: 99.0357% ( 2) 00:10:09.076 17.446 - 17.541: 99.0507% ( 2) 00:10:09.076 17.541 - 17.636: 99.0731% ( 3) 00:10:09.076 17.636 - 17.730: 99.1030% ( 4) 00:10:09.076 17.730 - 17.825: 99.1553% ( 7) 00:10:09.076 17.825 - 17.920: 99.1852% ( 4) 00:10:09.076 17.920 - 18.015: 99.2675% ( 11) 00:10:09.076 18.015 - 18.110: 99.3048% ( 5) 00:10:09.076 18.110 - 18.204: 99.3572% ( 7) 00:10:09.076 18.204 - 18.299: 99.4319% ( 10) 00:10:09.076 18.299 - 18.394: 99.4768% ( 6) 00:10:09.076 18.394 - 18.489: 99.5665% ( 12) 00:10:09.076 18.489 - 18.584: 99.6337% ( 9) 00:10:09.076 18.584 - 18.679: 99.6711% ( 5) 00:10:09.076 18.679 - 18.773: 99.7010% ( 4) 00:10:09.076 18.773 - 18.868: 99.7384% ( 5) 00:10:09.076 18.868 - 18.963: 99.7758% ( 5) 00:10:09.076 18.963 - 19.058: 99.7982% ( 3) 00:10:09.076 19.058 - 19.153: 99.8281% ( 4) 00:10:09.076 19.247 - 19.342: 99.8356% ( 1) 00:10:09.076 19.721 - 19.816: 99.8430% ( 1) 00:10:09.076 20.385 - 20.480: 99.8505% ( 1) 00:10:09.076 22.281 - 22.376: 99.8580% ( 1) 00:10:09.076 23.419 - 23.514: 99.8655% ( 1) 00:10:09.076 23.799 - 23.893: 99.8729% ( 1) 00:10:09.076 24.652 - 24.841: 99.8804% ( 1) 00:10:09.076 25.979 - 26.169: 99.8879% ( 1) 00:10:09.076 79.265 - 79.644: 99.8954% ( 1) 00:10:09.076 3980.705 - 4004.978: 99.9925% ( 13) 00:10:09.076 4004.978 - 4029.250: 100.0000% ( 1) 00:10:09.076 00:10:09.076 Complete histogram 00:10:09.076 ================== 00:10:09.076 Range in us Cumulative Count 00:10:09.076 2.039 - 2.050: 2.5714% ( 344) 00:10:09.076 2.050 - 2.062: 10.5995% ( 1074) 
00:10:09.076 2.062 - 2.074: 13.0588% ( 329) 00:10:09.076 2.074 - 2.086: 34.8707% ( 2918) 00:10:09.076 2.086 - 2.098: 57.1535% ( 2981) 00:10:09.076 2.098 - 2.110: 61.3171% ( 557) 00:10:09.076 2.110 - 2.121: 65.0022% ( 493) 00:10:09.076 2.121 - 2.133: 67.2148% ( 296) 00:10:09.076 2.133 - 2.145: 68.5080% ( 173) 00:10:09.076 2.145 - 2.157: 76.4315% ( 1060) 00:10:09.076 2.157 - 2.169: 82.0750% ( 755) 00:10:09.076 2.169 - 2.181: 83.2337% ( 155) 00:10:09.076 2.181 - 2.193: 84.4820% ( 167) 00:10:09.076 2.193 - 2.204: 85.6182% ( 152) 00:10:09.076 2.204 - 2.216: 86.4554% ( 112) 00:10:09.076 2.216 - 2.228: 89.8191% ( 450) 00:10:09.076 2.228 - 2.240: 92.5923% ( 371) 00:10:09.076 2.240 - 2.252: 93.5940% ( 134) 00:10:09.076 2.252 - 2.264: 94.2219% ( 84) 00:10:09.076 2.264 - 2.276: 94.4760% ( 34) 00:10:09.076 2.276 - 2.287: 94.7601% ( 38) 00:10:09.076 2.287 - 2.299: 94.9694% ( 28) 00:10:09.076 2.299 - 2.311: 95.3282% ( 48) 00:10:09.076 2.311 - 2.323: 95.5972% ( 36) 00:10:09.076 2.323 - 2.335: 95.7467% ( 20) 00:10:09.076 2.335 - 2.347: 95.8962% ( 20) 00:10:09.076 2.347 - 2.359: 96.0532% ( 21) 00:10:09.076 2.359 - 2.370: 96.2550% ( 27) 00:10:09.076 2.370 - 2.382: 96.5316% ( 37) 00:10:09.076 2.382 - 2.394: 97.0025% ( 63) 00:10:09.076 2.394 - 2.406: 97.3838% ( 51) 00:10:09.076 2.406 - 2.418: 97.5183% ( 18) 00:10:09.076 2.418 - 2.430: 97.7052% ( 25) 00:10:09.076 2.430 - 2.441: 97.8846% ( 24) 00:10:09.076 2.441 - 2.453: 97.9892% ( 14) 00:10:09.076 2.453 - 2.465: 98.1088% ( 16) 00:10:09.076 2.465 - 2.477: 98.2135% ( 14) 00:10:09.076 2.477 - 2.489: 98.2509% ( 5) 00:10:09.076 2.489 - 2.501: 98.3032% ( 7) 00:10:09.076 2.501 - 2.513: 98.3480% ( 6) 00:10:09.076 2.513 - 2.524: 98.3779% ( 4) 00:10:09.076 2.524 - 2.536: 98.3929% ( 2) 00:10:09.076 2.536 - 2.548: 98.4078% ( 2) 00:10:09.076 2.548 - 2.560: 98.4228% ( 2) 00:10:09.076 2.560 - 2.572: 98.4377% ( 2) 00:10:09.076 2.572 - 2.584: 98.4452% ( 1) 00:10:09.076 2.584 - 2.596: 98.4527% ( 1) 00:10:09.076 2.596 - 2.607: 98.4602% ( 1) 00:10:09.076 2.631 - 2.643: 98.4751% ( 2) 00:10:09.076 2.655 - 2.667: 98.4901% ( 2) 00:10:09.076 2.679 - 2.690: 98.5050% ( 2) 00:10:09.076 2.690 - 2.702: 98.5125% ( 1) 00:10:09.076 2.702 - 2.714: 98.5274% ( 2) 00:10:09.076 2.773 - 2.785: 98.5349% ( 1) 00:10:09.076 3.319 - 3.342: 98.5424% ( 1) 00:10:09.076 3.390 - 3.413: 98.5499% ( 1) 00:10:09.076 3.413 - 3.437: 98.5723% ( 3) 00:10:09.076 3.437 - 3.461: 98.5872% ( 2) 00:10:09.076 3.461 - 3.484: 98.5947% ( 1) 00:10:09.076 3.484 - 3.508: 98.6171% ( 3) 00:10:09.076 3.532 - 3.556: 98.6396% ( 3) 00:10:09.076 3.556 - 3.579: 98.6545% ( 2) 00:10:09.076 3.579 - 3.603: 98.6620% ( 1) 00:10:09.076 3.627 - 3.650: 98.6844% ( 3) 00:10:09.076 3.650 - 3.674: 98.7068% ( 3) 00:10:09.076 3.674 - 3.698: 98.7218% ( 2) 00:10:09.076 3.698 - 3.721: 98.7293% ( 1) 00:10:09.076 3.721 - 3.745: 98.7367% ( 1) 00:10:09.076 3.769 - 3.793: 98.7442% ( 1) 00:10:09.076 3.816 - 3.840: 98.7592% ( 2) 00:10:09.076 3.840 - 3.864: 98.7666% ( 1) 00:10:09.076 5.689 - 5.713: 98.7741% ( 1) 00:10:09.076 5.902 - 5.926: 98.7816% ( 1) 00:10:09.076 5.926 - 5.950: 98.7891% ( 1) 00:10:09.076 5.950 - 5.973: 98.7965% ( 1) 00:10:09.076 6.044 - 6.068: 98.8040% ( 1) 00:10:09.076 6.163 - 6.210: 98.8115% ( 1) 00:10:09.076 6.353 - 6.400: 98.8264% ( 2) 00:10:09.076 6.400 - 6.447: 98.8339% ( 1) 00:10:09.076 6.590 - 6.637: 98.8414% ( 1) 00:10:09.076 6.969 - 7.016: 98.8489% ( 1) 00:10:09.076 7.111 - 7.159: 98.8563% ( 1) 00:10:09.076 7.348 - 7.396: 98.8638% ( 1) 00:10:09.076 8.154 - 8.201: 98.8713% ( 1) 00:10:09.076 8.770 - 8.818: 98.8788% ( 1) 00:10:09.076 
10.856 - 10.904: 98.8862% ( 1) 00:10:09.076 15.170 - 15.265: 98.8937% ( 1) 00:10:09.077 15.550 - 15.644: 98.9012% ( 1) 00:10:09.077 15.644 - 15.739: 98.9161% ( 2) 00:10:09.077 15.834 - 15.929: 98.9311% ( 2) 00:10:09.077 15.929 - 16.024: 98.9834% ( 7) 00:10:09.077 16.024 - 16.119: 99.0208% ( 5) 00:10:09.077 16.119 - 16.213: 99.0432% ( 3) 00:10:09.077 16.213 - 16.308: 99.0656% ( 3) 00:10:09.077 16.308 - 16.403: 99.0955% ( 4) 00:10:09.077 16.403 - 16.498: 99.1329% ( 5) 00:10:09.077 16.498 - 16.593: 99.2151% ( 11) 00:10:09.077 16.593 - 16.687: 99.2301% ( 2) [2024-04-24 21:24:34.386415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:09.077 16.687 - 16.782: 99.2749% ( 6) 00:10:09.077 16.782 - 16.877: 99.2974% ( 3) 00:10:09.077 16.877 - 16.972: 99.3123% ( 2) 00:10:09.077 16.972 - 17.067: 99.3347% ( 3) 00:10:09.077 17.067 - 17.161: 99.3646% ( 4) 00:10:09.077 17.161 - 17.256: 99.3721% ( 1) 00:10:09.077 17.351 - 17.446: 99.3871% ( 2) 00:10:09.077 17.541 - 17.636: 99.4095% ( 3) 00:10:09.077 17.636 - 17.730: 99.4244% ( 2) 00:10:09.077 17.825 - 17.920: 99.4319% ( 1) 00:10:09.077 17.920 - 18.015: 99.4394% ( 1) 00:10:09.077 18.015 - 18.110: 99.4469% ( 1) 00:10:09.077 18.110 - 18.204: 99.4543% ( 1) 00:10:09.077 18.204 - 18.299: 99.4618% ( 1) 00:10:09.077 18.299 - 18.394: 99.4693% ( 1) 00:10:09.077 18.394 - 18.489: 99.4842% ( 2) 00:10:09.077 28.824 - 29.013: 99.4917% ( 1) 00:10:09.077 3980.705 - 4004.978: 99.8729% ( 51) 00:10:09.077 4004.978 - 4029.250: 100.0000% ( 17) 00:10:09.077 00:10:09.077 21:24:34 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:09.077 21:24:34 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:09.077 21:24:34 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:09.077 21:24:34 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:09.077 21:24:34 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:09.077 [ 00:10:09.077 { 00:10:09.077 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:09.077 "subtype": "Discovery", 00:10:09.077 "listen_addresses": [], 00:10:09.077 "allow_any_host": true, 00:10:09.077 "hosts": [] 00:10:09.077 }, 00:10:09.077 { 00:10:09.077 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:09.077 "subtype": "NVMe", 00:10:09.077 "listen_addresses": [ 00:10:09.077 { 00:10:09.077 "transport": "VFIOUSER", 00:10:09.077 "trtype": "VFIOUSER", 00:10:09.077 "adrfam": "IPv4", 00:10:09.077 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:09.077 "trsvcid": "0" 00:10:09.077 } 00:10:09.077 ], 00:10:09.077 "allow_any_host": true, 00:10:09.077 "hosts": [], 00:10:09.077 "serial_number": "SPDK1", 00:10:09.077 "model_number": "SPDK bdev Controller", 00:10:09.077 "max_namespaces": 32, 00:10:09.077 "min_cntlid": 1, 00:10:09.077 "max_cntlid": 65519, 00:10:09.077 "namespaces": [ 00:10:09.077 { 00:10:09.077 "nsid": 1, 00:10:09.077 "bdev_name": "Malloc1", 00:10:09.077 "name": "Malloc1", 00:10:09.077 "nguid": "285DDECE4712471CB8BC9DCD76CDC454", 00:10:09.077 "uuid": "285ddece-4712-471c-b8bc-9dcd76cdc454" 00:10:09.077 }, 00:10:09.077 { 00:10:09.077 "nsid": 2, 00:10:09.077 "bdev_name": "Malloc3", 00:10:09.077 "name": "Malloc3", 00:10:09.077 "nguid": "5AF43DC50AEA4A66806C3975C8E1FBB6", 00:10:09.077 "uuid": "5af43dc5-0aea-4a66-806c-3975c8e1fbb6" 00:10:09.077 } 00:10:09.077 ] 00:10:09.077 }, 
00:10:09.077 { 00:10:09.077 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:09.077 "subtype": "NVMe", 00:10:09.077 "listen_addresses": [ 00:10:09.077 { 00:10:09.077 "transport": "VFIOUSER", 00:10:09.077 "trtype": "VFIOUSER", 00:10:09.077 "adrfam": "IPv4", 00:10:09.077 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:09.077 "trsvcid": "0" 00:10:09.077 } 00:10:09.077 ], 00:10:09.077 "allow_any_host": true, 00:10:09.077 "hosts": [], 00:10:09.077 "serial_number": "SPDK2", 00:10:09.077 "model_number": "SPDK bdev Controller", 00:10:09.077 "max_namespaces": 32, 00:10:09.077 "min_cntlid": 1, 00:10:09.077 "max_cntlid": 65519, 00:10:09.077 "namespaces": [ 00:10:09.077 { 00:10:09.077 "nsid": 1, 00:10:09.077 "bdev_name": "Malloc2", 00:10:09.077 "name": "Malloc2", 00:10:09.077 "nguid": "FFB6A5E44506473799980A505DE587DB", 00:10:09.077 "uuid": "ffb6a5e4-4506-4737-9998-0a505de587db" 00:10:09.077 } 00:10:09.077 ] 00:10:09.077 } 00:10:09.077 ] 00:10:09.077 21:24:34 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:09.077 21:24:34 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2554980 00:10:09.077 21:24:34 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:09.077 21:24:34 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:09.077 21:24:34 -- common/autotest_common.sh@1251 -- # local i=0 00:10:09.077 21:24:34 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:09.077 21:24:34 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:09.077 21:24:34 -- common/autotest_common.sh@1262 -- # return 0 00:10:09.077 21:24:34 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:09.077 21:24:34 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:09.336 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.336 [2024-04-24 21:24:34.880112] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:09.336 Malloc4 00:10:09.336 21:24:34 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:09.593 [2024-04-24 21:24:35.225690] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:09.593 21:24:35 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:09.851 Asynchronous Event Request test 00:10:09.851 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:09.851 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:09.851 Registering asynchronous event callbacks... 00:10:09.851 Starting namespace attribute notice tests for all controllers... 00:10:09.851 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:09.851 aer_cb - Changed Namespace 00:10:09.851 Cleaning up... 
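Note on the sequence above: aer_vfio_user hot-attaches a namespace to a live subsystem and waits for the resulting namespace-attribute-changed AEN. A rough sketch of the same flow against a running target, assuming a generic SPDK checkout in place of the absolute workspace paths used by this run:

    # start the AER listener against the vfio-user controller (arguments as in the run above)
    test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file &
    # hot-attach a second namespace; this is what fires the AEN the listener waits for
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    # re-read the subsystem list to confirm nsid 2 appears, as in the dump below
    scripts/rpc.py nvmf_get_subsystems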
00:10:09.851 [ 00:10:09.851 { 00:10:09.851 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:09.851 "subtype": "Discovery", 00:10:09.851 "listen_addresses": [], 00:10:09.851 "allow_any_host": true, 00:10:09.851 "hosts": [] 00:10:09.851 }, 00:10:09.851 { 00:10:09.851 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:09.851 "subtype": "NVMe", 00:10:09.851 "listen_addresses": [ 00:10:09.851 { 00:10:09.851 "transport": "VFIOUSER", 00:10:09.851 "trtype": "VFIOUSER", 00:10:09.851 "adrfam": "IPv4", 00:10:09.851 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:09.851 "trsvcid": "0" 00:10:09.851 } 00:10:09.851 ], 00:10:09.851 "allow_any_host": true, 00:10:09.851 "hosts": [], 00:10:09.851 "serial_number": "SPDK1", 00:10:09.851 "model_number": "SPDK bdev Controller", 00:10:09.851 "max_namespaces": 32, 00:10:09.851 "min_cntlid": 1, 00:10:09.851 "max_cntlid": 65519, 00:10:09.851 "namespaces": [ 00:10:09.851 { 00:10:09.851 "nsid": 1, 00:10:09.851 "bdev_name": "Malloc1", 00:10:09.851 "name": "Malloc1", 00:10:09.851 "nguid": "285DDECE4712471CB8BC9DCD76CDC454", 00:10:09.851 "uuid": "285ddece-4712-471c-b8bc-9dcd76cdc454" 00:10:09.851 }, 00:10:09.851 { 00:10:09.851 "nsid": 2, 00:10:09.851 "bdev_name": "Malloc3", 00:10:09.851 "name": "Malloc3", 00:10:09.851 "nguid": "5AF43DC50AEA4A66806C3975C8E1FBB6", 00:10:09.851 "uuid": "5af43dc5-0aea-4a66-806c-3975c8e1fbb6" 00:10:09.851 } 00:10:09.851 ] 00:10:09.851 }, 00:10:09.851 { 00:10:09.851 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:09.851 "subtype": "NVMe", 00:10:09.851 "listen_addresses": [ 00:10:09.851 { 00:10:09.851 "transport": "VFIOUSER", 00:10:09.851 "trtype": "VFIOUSER", 00:10:09.851 "adrfam": "IPv4", 00:10:09.851 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:09.851 "trsvcid": "0" 00:10:09.851 } 00:10:09.851 ], 00:10:09.851 "allow_any_host": true, 00:10:09.851 "hosts": [], 00:10:09.851 "serial_number": "SPDK2", 00:10:09.851 "model_number": "SPDK bdev Controller", 00:10:09.851 "max_namespaces": 32, 00:10:09.851 "min_cntlid": 1, 00:10:09.851 "max_cntlid": 65519, 00:10:09.851 "namespaces": [ 00:10:09.851 { 00:10:09.851 "nsid": 1, 00:10:09.851 "bdev_name": "Malloc2", 00:10:09.851 "name": "Malloc2", 00:10:09.851 "nguid": "FFB6A5E44506473799980A505DE587DB", 00:10:09.851 "uuid": "ffb6a5e4-4506-4737-9998-0a505de587db" 00:10:09.851 }, 00:10:09.851 { 00:10:09.851 "nsid": 2, 00:10:09.851 "bdev_name": "Malloc4", 00:10:09.851 "name": "Malloc4", 00:10:09.851 "nguid": "FF495AA844BA4891B20673952EB80F00", 00:10:09.851 "uuid": "ff495aa8-44ba-4891-b206-73952eb80f00" 00:10:09.851 } 00:10:09.851 ] 00:10:09.851 } 00:10:09.851 ] 00:10:09.851 21:24:35 -- target/nvmf_vfio_user.sh@44 -- # wait 2554980 00:10:09.851 21:24:35 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:09.851 21:24:35 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2548746 00:10:09.851 21:24:35 -- common/autotest_common.sh@936 -- # '[' -z 2548746 ']' 00:10:09.851 21:24:35 -- common/autotest_common.sh@940 -- # kill -0 2548746 00:10:09.851 21:24:35 -- common/autotest_common.sh@941 -- # uname 00:10:09.851 21:24:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:09.851 21:24:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2548746 00:10:09.851 21:24:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:09.851 21:24:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:09.851 21:24:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2548746' 00:10:09.851 killing process with pid 2548746 00:10:09.851 
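Teardown goes through the harness's killprocess helper: probe the pid, signal it, then reap it. A rough sketch of that idiom, with $pid standing in for the nvmf_tgt pid above:

    kill -0 $pid   # probe only; succeeds while the process is still alive
    kill $pid      # default SIGTERM, giving the target a clean shutdown
    wait $pid      # reap the process so its exit status can be inspected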
21:24:35 -- common/autotest_common.sh@955 -- # kill 2548746 00:10:09.851 [2024-04-24 21:24:35.504823] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:10:09.851 21:24:35 -- common/autotest_common.sh@960 -- # wait 2548746 00:10:10.420 21:24:35 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:10.420 21:24:35 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:10.420 21:24:35 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:10.420 21:24:35 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:10.420 21:24:35 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:10.420 21:24:35 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2555126 00:10:10.420 21:24:35 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:10.420 21:24:35 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2555126' 00:10:10.420 Process pid: 2555126 00:10:10.420 21:24:35 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:10.420 21:24:35 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2555126 00:10:10.420 21:24:35 -- common/autotest_common.sh@817 -- # '[' -z 2555126 ']' 00:10:10.420 21:24:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.420 21:24:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:10.420 21:24:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.420 21:24:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:10.420 21:24:35 -- common/autotest_common.sh@10 -- # set +x 00:10:10.420 [2024-04-24 21:24:35.935152] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:10.420 [2024-04-24 21:24:35.936208] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:10:10.420 [2024-04-24 21:24:35.936269] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.420 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.420 [2024-04-24 21:24:36.000599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.678 [2024-04-24 21:24:36.117106] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.678 [2024-04-24 21:24:36.117169] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.678 [2024-04-24 21:24:36.117186] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.678 [2024-04-24 21:24:36.117199] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.678 [2024-04-24 21:24:36.117211] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
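This pass relaunches the target in interrupt mode rather than polled mode. A minimal sketch of the equivalent manual invocation, assuming a generic SPDK checkout (flags copied from the run above):

    # run the target on cores 0-3 with interrupt-mode reactors
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # once the RPC socket is up, create the transport; -M -I are the extra
    # transport_args this test variant passes through setup_nvmf_vfio_user
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I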
00:10:10.678 [2024-04-24 21:24:36.117302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.678 [2024-04-24 21:24:36.117378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.678 [2024-04-24 21:24:36.117408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.678 [2024-04-24 21:24:36.117410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.678 [2024-04-24 21:24:36.230260] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:10:10.678 [2024-04-24 21:24:36.230497] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:10:10.678 [2024-04-24 21:24:36.230765] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:10:10.678 [2024-04-24 21:24:36.231486] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:10.678 [2024-04-24 21:24:36.231608] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:10:11.247 21:24:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:11.247 21:24:36 -- common/autotest_common.sh@850 -- # return 0 00:10:11.247 21:24:36 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:12.626 21:24:37 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:12.626 21:24:38 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:12.626 21:24:38 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:12.626 21:24:38 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:12.626 21:24:38 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:12.626 21:24:38 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:12.885 Malloc1 00:10:12.885 21:24:38 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:13.143 21:24:38 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:13.401 21:24:38 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:13.660 21:24:39 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:13.660 21:24:39 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:13.660 21:24:39 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:13.918 Malloc2 00:10:13.918 21:24:39 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:14.176 21:24:39 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:14.434 21:24:39 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:14.693 21:24:40 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:14.693 21:24:40 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2555126 00:10:14.693 21:24:40 -- common/autotest_common.sh@936 -- # '[' -z 2555126 ']' 00:10:14.693 21:24:40 -- common/autotest_common.sh@940 -- # kill -0 2555126 00:10:14.693 21:24:40 -- common/autotest_common.sh@941 -- # uname 00:10:14.693 21:24:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:14.693 21:24:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2555126 00:10:14.693 21:24:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:14.693 21:24:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:14.693 21:24:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2555126' 00:10:14.693 killing process with pid 2555126 00:10:14.693 21:24:40 -- common/autotest_common.sh@955 -- # kill 2555126 00:10:14.693 21:24:40 -- common/autotest_common.sh@960 -- # wait 2555126 00:10:14.951 21:24:40 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:14.951 21:24:40 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:14.951 00:10:14.951 real 0m53.544s 00:10:14.951 user 3m30.989s 00:10:14.951 sys 0m4.613s 00:10:14.951 21:24:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:14.951 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:10:14.951 ************************************ 00:10:14.951 END TEST nvmf_vfio_user 00:10:14.951 ************************************ 00:10:14.951 21:24:40 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:14.951 21:24:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:14.951 21:24:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.951 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:10:14.951 ************************************ 00:10:14.951 START TEST nvmf_vfio_user_nvme_compliance 00:10:14.951 ************************************ 00:10:14.951 21:24:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:15.211 * Looking for test storage... 
00:10:15.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:15.211 21:24:40 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.211 21:24:40 -- nvmf/common.sh@7 -- # uname -s 00:10:15.211 21:24:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.211 21:24:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.211 21:24:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.211 21:24:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.211 21:24:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.211 21:24:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.211 21:24:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.211 21:24:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.211 21:24:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.211 21:24:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.211 21:24:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.211 21:24:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.211 21:24:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.211 21:24:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.211 21:24:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.211 21:24:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.211 21:24:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.211 21:24:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.211 21:24:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.211 21:24:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.211 21:24:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.211 21:24:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.211 21:24:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.211 21:24:40 -- paths/export.sh@5 -- # export PATH 00:10:15.211 21:24:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.211 21:24:40 -- nvmf/common.sh@47 -- # : 0 00:10:15.211 21:24:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:15.211 21:24:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:15.211 21:24:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.211 21:24:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.211 21:24:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.211 21:24:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:15.211 21:24:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:15.211 21:24:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:15.211 21:24:40 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.211 21:24:40 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.211 21:24:40 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:15.211 21:24:40 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:15.211 21:24:40 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:15.211 21:24:40 -- compliance/compliance.sh@20 -- # nvmfpid=2555857 00:10:15.211 21:24:40 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:15.211 21:24:40 -- compliance/compliance.sh@21 -- # echo 'Process pid: 2555857' 00:10:15.211 Process pid: 2555857 00:10:15.211 21:24:40 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:15.211 21:24:40 -- compliance/compliance.sh@24 -- # waitforlisten 2555857 00:10:15.211 21:24:40 -- common/autotest_common.sh@817 -- # '[' -z 2555857 ']' 00:10:15.211 21:24:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.211 21:24:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:15.211 21:24:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.211 21:24:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:15.211 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:10:15.211 [2024-04-24 21:24:40.731743] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:10:15.211 [2024-04-24 21:24:40.731835] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.211 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.211 [2024-04-24 21:24:40.798429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.472 [2024-04-24 21:24:40.917531] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.472 [2024-04-24 21:24:40.917602] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.472 [2024-04-24 21:24:40.917619] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.472 [2024-04-24 21:24:40.917642] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.472 [2024-04-24 21:24:40.917655] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.472 [2024-04-24 21:24:40.921656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.472 [2024-04-24 21:24:40.921694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.472 [2024-04-24 21:24:40.921698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.472 21:24:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:15.472 21:24:41 -- common/autotest_common.sh@850 -- # return 0 00:10:15.472 21:24:41 -- compliance/compliance.sh@26 -- # sleep 1 00:10:16.409 21:24:42 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:16.410 21:24:42 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:16.410 21:24:42 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:16.410 21:24:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.410 21:24:42 -- common/autotest_common.sh@10 -- # set +x 00:10:16.410 21:24:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.410 21:24:42 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:16.410 21:24:42 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:16.410 21:24:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.410 21:24:42 -- common/autotest_common.sh@10 -- # set +x 00:10:16.668 malloc0 00:10:16.668 21:24:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.668 21:24:42 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:16.668 21:24:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.668 21:24:42 -- common/autotest_common.sh@10 -- # set +x 00:10:16.668 21:24:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.668 21:24:42 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:16.668 21:24:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.668 21:24:42 -- common/autotest_common.sh@10 -- # set +x 00:10:16.668 21:24:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.668 21:24:42 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:16.668 21:24:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.668 21:24:42 -- common/autotest_common.sh@10 -- # set +x 00:10:16.668 21:24:42 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.668 21:24:42 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:16.668 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.668 00:10:16.668 00:10:16.668 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.668 http://cunit.sourceforge.net/ 00:10:16.668 00:10:16.668 00:10:16.668 Suite: nvme_compliance 00:10:16.668 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-24 21:24:42.276208] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:16.668 [2024-04-24 21:24:42.277648] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:16.668 [2024-04-24 21:24:42.277674] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:16.668 [2024-04-24 21:24:42.277687] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:16.668 [2024-04-24 21:24:42.279227] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:16.668 passed 00:10:16.929 Test: admin_identify_ctrlr_verify_fused ...[2024-04-24 21:24:42.364821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:16.929 [2024-04-24 21:24:42.367843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:16.929 passed 00:10:16.929 Test: admin_identify_ns ...[2024-04-24 21:24:42.454119] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:16.929 [2024-04-24 21:24:42.514644] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:16.929 [2024-04-24 21:24:42.522647] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:16.929 [2024-04-24 21:24:42.543769] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:16.929 passed 00:10:17.187 Test: admin_get_features_mandatory_features ...[2024-04-24 21:24:42.627640] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:17.188 [2024-04-24 21:24:42.630666] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:17.188 passed 00:10:17.188 Test: admin_get_features_optional_features ...[2024-04-24 21:24:42.716225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:17.188 [2024-04-24 21:24:42.719246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:17.188 passed 00:10:17.188 Test: admin_set_features_number_of_queues ...[2024-04-24 21:24:42.799118] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:17.445 [2024-04-24 21:24:42.908752] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:17.445 passed 00:10:17.445 Test: admin_get_log_page_mandatory_logs ...[2024-04-24 21:24:42.991343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:17.445 [2024-04-24 21:24:42.994372] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:17.445 passed 00:10:17.445 Test: admin_get_log_page_with_lpo ...[2024-04-24 21:24:43.076156] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:17.703 [2024-04-24 21:24:43.144644] 
ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:17.703 [2024-04-24 21:24:43.157721] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:17.703 passed 00:10:17.703 Test: fabric_property_get ...[2024-04-24 21:24:43.240243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:17.703 [2024-04-24 21:24:43.241500] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:17.703 [2024-04-24 21:24:43.243265] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:17.703 passed 00:10:17.703 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-24 21:24:43.326774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:17.703 [2024-04-24 21:24:43.328079] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:17.703 [2024-04-24 21:24:43.329800] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:17.703 passed 00:10:17.962 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-24 21:24:43.414169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:17.962 [2024-04-24 21:24:43.497639] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:17.962 [2024-04-24 21:24:43.513653] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:17.962 [2024-04-24 21:24:43.518749] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:17.962 passed 00:10:17.962 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-24 21:24:43.602526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:17.962 [2024-04-24 21:24:43.603814] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:17.962 [2024-04-24 21:24:43.605549] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:17.962 passed 00:10:18.222 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-24 21:24:43.690696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:18.222 [2024-04-24 21:24:43.767650] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:18.222 [2024-04-24 21:24:43.791636] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:18.222 [2024-04-24 21:24:43.796767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:18.222 passed 00:10:18.222 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-24 21:24:43.879350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:18.222 [2024-04-24 21:24:43.880681] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:18.222 [2024-04-24 21:24:43.880722] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:18.222 [2024-04-24 21:24:43.882379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:18.482 passed 00:10:18.482 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-24 21:24:43.963510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:18.482 [2024-04-24 21:24:44.056643] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:18.482 [2024-04-24 21:24:44.064642] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:18.482 [2024-04-24 21:24:44.071642] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:18.482 [2024-04-24 21:24:44.080645] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:18.482 [2024-04-24 21:24:44.109757] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:18.482 passed 00:10:18.742 Test: admin_create_io_sq_verify_pc ...[2024-04-24 21:24:44.193350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:18.742 [2024-04-24 21:24:44.209657] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:18.742 [2024-04-24 21:24:44.226759] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:18.742 passed 00:10:18.742 Test: admin_create_io_qp_max_qps ...[2024-04-24 21:24:44.309319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:20.120 [2024-04-24 21:24:45.404645] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:20.120 [2024-04-24 21:24:45.788041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:20.378 passed 00:10:20.378 Test: admin_create_io_sq_shared_cq ...[2024-04-24 21:24:45.872275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:20.378 [2024-04-24 21:24:46.003641] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:20.378 [2024-04-24 21:24:46.040724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:20.637 passed 00:10:20.637 00:10:20.637 Run Summary: Type Total Ran Passed Failed Inactive 00:10:20.637 suites 1 1 n/a 0 0 00:10:20.637 tests 18 18 18 0 0 00:10:20.637 asserts 360 360 360 0 n/a 00:10:20.637 00:10:20.637 Elapsed time = 1.559 seconds 00:10:20.637 21:24:46 -- compliance/compliance.sh@42 -- # killprocess 2555857 00:10:20.637 21:24:46 -- common/autotest_common.sh@936 -- # '[' -z 2555857 ']' 00:10:20.637 21:24:46 -- common/autotest_common.sh@940 -- # kill -0 2555857 00:10:20.637 21:24:46 -- common/autotest_common.sh@941 -- # uname 00:10:20.637 21:24:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:20.637 21:24:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2555857 00:10:20.637 21:24:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:20.637 21:24:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:20.637 21:24:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2555857' 00:10:20.637 killing process with pid 2555857 00:10:20.637 21:24:46 -- common/autotest_common.sh@955 -- # kill 2555857 00:10:20.637 21:24:46 -- common/autotest_common.sh@960 -- # wait 2555857 00:10:20.894 21:24:46 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:20.894 21:24:46 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:20.894 00:10:20.894 real 0m5.787s 00:10:20.894 user 0m16.157s 00:10:20.894 sys 0m0.544s 00:10:20.894 21:24:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:20.894 21:24:46 -- common/autotest_common.sh@10 -- # set +x 00:10:20.894 ************************************ 00:10:20.894 END TEST 
nvmf_vfio_user_nvme_compliance 00:10:20.894 ************************************ 00:10:20.894 21:24:46 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:20.894 21:24:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:20.894 21:24:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:20.894 21:24:46 -- common/autotest_common.sh@10 -- # set +x 00:10:20.894 ************************************ 00:10:20.894 START TEST nvmf_vfio_user_fuzz 00:10:20.894 ************************************ 00:10:20.894 21:24:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:21.152 * Looking for test storage... 00:10:21.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.152 21:24:46 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.152 21:24:46 -- nvmf/common.sh@7 -- # uname -s 00:10:21.152 21:24:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.152 21:24:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.152 21:24:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.152 21:24:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.152 21:24:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.152 21:24:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.152 21:24:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.152 21:24:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.152 21:24:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.152 21:24:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.152 21:24:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:21.152 21:24:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:21.152 21:24:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.152 21:24:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.152 21:24:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.152 21:24:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.152 21:24:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.152 21:24:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.152 21:24:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.152 21:24:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.152 21:24:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.152 21:24:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.152 21:24:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.152 21:24:46 -- paths/export.sh@5 -- # export PATH 00:10:21.153 21:24:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.153 21:24:46 -- nvmf/common.sh@47 -- # : 0 00:10:21.153 21:24:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.153 21:24:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.153 21:24:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.153 21:24:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.153 21:24:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.153 21:24:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.153 21:24:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.153 21:24:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2556589 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2556589' 00:10:21.153 Process pid: 2556589 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:21.153 21:24:46 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2556589 00:10:21.153 21:24:46 -- common/autotest_common.sh@817 -- 
# '[' -z 2556589 ']' 00:10:21.153 21:24:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.153 21:24:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:21.153 21:24:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.153 21:24:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:21.153 21:24:46 -- common/autotest_common.sh@10 -- # set +x 00:10:22.090 21:24:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:22.090 21:24:47 -- common/autotest_common.sh@850 -- # return 0 00:10:22.090 21:24:47 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:23.053 21:24:48 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:23.053 21:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.053 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:10:23.053 21:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.053 21:24:48 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:23.053 21:24:48 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:23.053 21:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.053 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:10:23.053 malloc0 00:10:23.053 21:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.053 21:24:48 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:23.053 21:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.053 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:10:23.053 21:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.053 21:24:48 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:23.053 21:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.053 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:10:23.053 21:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.053 21:24:48 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:23.053 21:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.053 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:10:23.053 21:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.053 21:24:48 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:23.053 21:24:48 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:10:55.133 Fuzzing completed. 
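The fuzz pass above amounts to pointing nvme_fuzz at the vfio-user endpoint for a fixed duration and seed. A sketch of the invocation, assuming a generic SPDK checkout (arguments copied from the run above):

    # fuzz the VFIOUSER controller for 30 s; the fixed seed (-S) keeps runs reproducible
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

The opcode and command-count totals that follow are printed by the fuzzer as it shuts down.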
Shutting down the fuzz application 00:10:55.133 00:10:55.133 Dumping successful admin opcodes: 00:10:55.133 8, 9, 10, 24, 00:10:55.133 Dumping successful io opcodes: 00:10:55.133 0, 00:10:55.133 NS: 0x200003a1ef00 I/O qp, Total commands completed: 567976, total successful commands: 2183, random_seed: 2771745856 00:10:55.133 NS: 0x200003a1ef00 admin qp, Total commands completed: 72158, total successful commands: 568, random_seed: 2317145920 00:10:55.133 21:25:19 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:10:55.133 21:25:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.133 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:10:55.133 21:25:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.133 21:25:19 -- target/vfio_user_fuzz.sh@46 -- # killprocess 2556589 00:10:55.133 21:25:19 -- common/autotest_common.sh@936 -- # '[' -z 2556589 ']' 00:10:55.133 21:25:19 -- common/autotest_common.sh@940 -- # kill -0 2556589 00:10:55.133 21:25:19 -- common/autotest_common.sh@941 -- # uname 00:10:55.133 21:25:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:55.133 21:25:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2556589 00:10:55.133 21:25:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:55.133 21:25:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:55.133 21:25:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2556589' 00:10:55.133 killing process with pid 2556589 00:10:55.133 21:25:19 -- common/autotest_common.sh@955 -- # kill 2556589 00:10:55.133 21:25:19 -- common/autotest_common.sh@960 -- # wait 2556589 00:10:55.133 21:25:19 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:10:55.133 21:25:19 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:10:55.133 00:10:55.133 real 0m33.038s 00:10:55.133 user 0m32.259s 00:10:55.133 sys 0m28.773s 00:10:55.133 21:25:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:55.133 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:10:55.133 ************************************ 00:10:55.133 END TEST nvmf_vfio_user_fuzz 00:10:55.133 ************************************ 00:10:55.133 21:25:19 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:55.133 21:25:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:55.133 21:25:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.133 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:10:55.133 ************************************ 00:10:55.133 START TEST nvmf_host_management 00:10:55.133 ************************************ 00:10:55.133 21:25:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:55.133 * Looking for test storage... 
00:10:55.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.133 21:25:19 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.133 21:25:19 -- nvmf/common.sh@7 -- # uname -s 00:10:55.133 21:25:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.133 21:25:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.133 21:25:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.133 21:25:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.133 21:25:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.133 21:25:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.133 21:25:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.133 21:25:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.133 21:25:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.133 21:25:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.133 21:25:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.133 21:25:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.133 21:25:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.133 21:25:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.133 21:25:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.133 21:25:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.133 21:25:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.133 21:25:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.133 21:25:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.133 21:25:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.133 21:25:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.134 21:25:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.134 21:25:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.134 21:25:19 -- paths/export.sh@5 -- # export PATH 00:10:55.134 21:25:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.134 21:25:19 -- nvmf/common.sh@47 -- # : 0 00:10:55.134 21:25:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.134 21:25:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.134 21:25:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.134 21:25:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.134 21:25:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.134 21:25:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.134 21:25:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.134 21:25:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.134 21:25:19 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.134 21:25:19 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.134 21:25:19 -- target/host_management.sh@105 -- # nvmftestinit 00:10:55.134 21:25:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:55.134 21:25:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.134 21:25:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:55.134 21:25:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:55.134 21:25:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:55.134 21:25:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.134 21:25:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.134 21:25:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.134 21:25:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:55.134 21:25:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:55.134 21:25:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.134 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:10:56.071 21:25:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:56.071 21:25:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:56.071 21:25:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:56.071 21:25:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:56.071 21:25:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:56.071 21:25:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:56.071 21:25:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:56.071 21:25:21 -- nvmf/common.sh@295 -- # net_devs=() 00:10:56.071 21:25:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:56.071 
21:25:21 -- nvmf/common.sh@296 -- # e810=() 00:10:56.071 21:25:21 -- nvmf/common.sh@296 -- # local -ga e810 00:10:56.071 21:25:21 -- nvmf/common.sh@297 -- # x722=() 00:10:56.071 21:25:21 -- nvmf/common.sh@297 -- # local -ga x722 00:10:56.071 21:25:21 -- nvmf/common.sh@298 -- # mlx=() 00:10:56.071 21:25:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:56.071 21:25:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.071 21:25:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.071 21:25:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.071 21:25:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.071 21:25:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.071 21:25:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.071 21:25:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.071 21:25:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.071 21:25:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.071 21:25:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.071 21:25:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.071 21:25:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:56.071 21:25:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:56.071 21:25:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:56.071 21:25:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.071 21:25:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:56.071 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:56.071 21:25:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.071 21:25:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:56.071 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:56.071 21:25:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:56.071 21:25:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.071 21:25:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.071 21:25:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:56.071 21:25:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.071 21:25:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:10:56.071 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:56.071 21:25:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.071 21:25:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.071 21:25:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.071 21:25:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:56.071 21:25:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.071 21:25:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:56.071 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:56.071 21:25:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.071 21:25:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:56.071 21:25:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:56.071 21:25:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:56.071 21:25:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:56.071 21:25:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.071 21:25:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.071 21:25:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.071 21:25:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:56.071 21:25:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.071 21:25:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.071 21:25:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:56.071 21:25:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.071 21:25:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.071 21:25:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:56.071 21:25:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:56.071 21:25:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.071 21:25:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.329 21:25:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.329 21:25:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.329 21:25:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:56.329 21:25:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.329 21:25:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.329 21:25:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.329 21:25:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:56.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:10:56.329 00:10:56.329 --- 10.0.0.2 ping statistics --- 00:10:56.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.329 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:10:56.329 21:25:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:56.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:10:56.329 00:10:56.329 --- 10.0.0.1 ping statistics --- 00:10:56.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.329 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:10:56.330 21:25:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.330 21:25:21 -- nvmf/common.sh@411 -- # return 0 00:10:56.330 21:25:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:56.330 21:25:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.330 21:25:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:56.330 21:25:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:56.330 21:25:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.330 21:25:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:56.330 21:25:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:56.330 21:25:21 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:10:56.330 21:25:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:56.330 21:25:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:56.330 21:25:21 -- common/autotest_common.sh@10 -- # set +x 00:10:56.330 ************************************ 00:10:56.330 START TEST nvmf_host_management 00:10:56.330 ************************************ 00:10:56.330 21:25:21 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:10:56.330 21:25:21 -- target/host_management.sh@69 -- # starttarget 00:10:56.330 21:25:21 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:56.330 21:25:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:56.330 21:25:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:56.330 21:25:21 -- common/autotest_common.sh@10 -- # set +x 00:10:56.330 21:25:21 -- nvmf/common.sh@470 -- # nvmfpid=2562076 00:10:56.330 21:25:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:56.330 21:25:21 -- nvmf/common.sh@471 -- # waitforlisten 2562076 00:10:56.330 21:25:21 -- common/autotest_common.sh@817 -- # '[' -z 2562076 ']' 00:10:56.330 21:25:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.330 21:25:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:56.330 21:25:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.330 21:25:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:56.330 21:25:21 -- common/autotest_common.sh@10 -- # set +x 00:10:56.588 [2024-04-24 21:25:22.032198] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:10:56.588 [2024-04-24 21:25:22.032279] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.588 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.588 [2024-04-24 21:25:22.103595] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.588 [2024-04-24 21:25:22.227165] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
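
The nvmf_tcp_init block above is what lets a single machine drive both ends of the TCP transport: the first E810 port (cvl_0_0) is moved into a private network namespace as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the cross pings confirm the loopback path. Condensed from the trace:

    # target side: first port into its own namespace at 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # initiator side: second port stays in the root namespace at 10.0.0.1
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # verify both directions before any NVMe traffic
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
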
00:10:56.588 [2024-04-24 21:25:22.227228] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.588 [2024-04-24 21:25:22.227245] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.588 [2024-04-24 21:25:22.227259] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.588 [2024-04-24 21:25:22.227271] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.588 [2024-04-24 21:25:22.227359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.588 [2024-04-24 21:25:22.227419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.588 [2024-04-24 21:25:22.227482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:56.588 [2024-04-24 21:25:22.227485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.521 21:25:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:57.521 21:25:22 -- common/autotest_common.sh@850 -- # return 0 00:10:57.521 21:25:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:57.521 21:25:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:57.521 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:10:57.521 21:25:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.521 21:25:22 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:57.521 21:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.521 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:10:57.521 [2024-04-24 21:25:22.980422] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.521 21:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.521 21:25:22 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:57.521 21:25:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:57.521 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:10:57.521 21:25:22 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:57.521 21:25:22 -- target/host_management.sh@23 -- # cat 00:10:57.521 21:25:22 -- target/host_management.sh@30 -- # rpc_cmd 00:10:57.521 21:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.521 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:10:57.521 Malloc0 00:10:57.521 [2024-04-24 21:25:23.039436] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.521 21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.521 21:25:23 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:57.521 21:25:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:57.521 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:57.521 21:25:23 -- target/host_management.sh@73 -- # perfpid=2562250 00:10:57.521 21:25:23 -- target/host_management.sh@74 -- # waitforlisten 2562250 /var/tmp/bdevperf.sock 00:10:57.521 21:25:23 -- common/autotest_common.sh@817 -- # '[' -z 2562250 ']' 00:10:57.521 21:25:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:57.521 21:25:23 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:57.521 21:25:23 -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:57.521 21:25:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:57.521 21:25:23 -- nvmf/common.sh@521 -- # config=() 00:10:57.521 21:25:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:57.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:57.521 21:25:23 -- nvmf/common.sh@521 -- # local subsystem config 00:10:57.522 21:25:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:57.522 21:25:23 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:57.522 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:57.522 21:25:23 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:57.522 { 00:10:57.522 "params": { 00:10:57.522 "name": "Nvme$subsystem", 00:10:57.522 "trtype": "$TEST_TRANSPORT", 00:10:57.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:57.522 "adrfam": "ipv4", 00:10:57.522 "trsvcid": "$NVMF_PORT", 00:10:57.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:57.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:57.522 "hdgst": ${hdgst:-false}, 00:10:57.522 "ddgst": ${ddgst:-false} 00:10:57.522 }, 00:10:57.522 "method": "bdev_nvme_attach_controller" 00:10:57.522 } 00:10:57.522 EOF 00:10:57.522 )") 00:10:57.522 21:25:23 -- nvmf/common.sh@543 -- # cat 00:10:57.522 21:25:23 -- nvmf/common.sh@545 -- # jq . 00:10:57.522 21:25:23 -- nvmf/common.sh@546 -- # IFS=, 00:10:57.522 21:25:23 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:57.522 "params": { 00:10:57.522 "name": "Nvme0", 00:10:57.522 "trtype": "tcp", 00:10:57.522 "traddr": "10.0.0.2", 00:10:57.522 "adrfam": "ipv4", 00:10:57.522 "trsvcid": "4420", 00:10:57.522 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:57.522 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:57.522 "hdgst": false, 00:10:57.522 "ddgst": false 00:10:57.522 }, 00:10:57.522 "method": "bdev_nvme_attach_controller" 00:10:57.522 }' 00:10:57.522 [2024-04-24 21:25:23.116819] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:10:57.522 [2024-04-24 21:25:23.116895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562250 ] 00:10:57.522 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.522 [2024-04-24 21:25:23.177548] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.782 [2024-04-24 21:25:23.287270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.040 Running I/O for 10 seconds... 
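
gen_nvmf_target_json above renders one bdev_nvme_attach_controller stanza per requested subsystem and hands it to bdevperf over a process-substitution fd (--json /dev/fd/63). The same thing with a regular file; nvme0.json is an illustrative name, and the outer subsystems/bdev envelope is the standard bdevperf config shape rather than something printed verbatim in this trace:

    cat > nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # same workload as above: queue depth 64, 64 KiB I/O, verify pattern, 10 s
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json nvme0.json -q 64 -o 65536 -w verify -t 10
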
00:10:58.040 21:25:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:58.040 21:25:23 -- common/autotest_common.sh@850 -- # return 0 00:10:58.040 21:25:23 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:58.040 21:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.040 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:58.040 21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.040 21:25:23 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:58.040 21:25:23 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:58.040 21:25:23 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:58.040 21:25:23 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:58.040 21:25:23 -- target/host_management.sh@52 -- # local ret=1 00:10:58.040 21:25:23 -- target/host_management.sh@53 -- # local i 00:10:58.040 21:25:23 -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:58.040 21:25:23 -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:58.040 21:25:23 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:58.040 21:25:23 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:58.040 21:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.040 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:58.040 21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.040 21:25:23 -- target/host_management.sh@55 -- # read_io_count=3 00:10:58.040 21:25:23 -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:10:58.040 21:25:23 -- target/host_management.sh@62 -- # sleep 0.25 00:10:58.300 21:25:23 -- target/host_management.sh@54 -- # (( i-- )) 00:10:58.300 21:25:23 -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:58.300 21:25:23 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:58.300 21:25:23 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:58.300 21:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.300 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:58.300 21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.300 21:25:23 -- target/host_management.sh@55 -- # read_io_count=386 00:10:58.300 21:25:23 -- target/host_management.sh@58 -- # '[' 386 -ge 100 ']' 00:10:58.300 21:25:23 -- target/host_management.sh@59 -- # ret=0 00:10:58.300 21:25:23 -- target/host_management.sh@60 -- # break 00:10:58.300 21:25:23 -- target/host_management.sh@64 -- # return 0 00:10:58.300 21:25:23 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:58.300 21:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.300 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:58.300 [2024-04-24 21:25:23.898449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc9ec0 is same with the state(5) to be set 00:10:58.300 [2024-04-24 21:25:23.898552] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc9ec0 is same with the state(5) to be set 00:10:58.300 [2024-04-24 21:25:23.898568] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc9ec0 is same with the state(5) to be set 00:10:58.300 
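
The waitforio gate traced above keeps the fault injection honest: host_management.sh polls bdevperf's private RPC socket until the Nvme0n1 bdev has completed at least 100 reads (3 on the first sample, 386 a quarter of a second later), so the host is only removed once I/O is demonstrably in flight. The same loop standalone, using SPDK's rpc.py:

    # block until bdevperf reports at least 100 completed reads on Nvme0n1
    while true; do
        ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
              jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && break
        sleep 0.25
    done
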
(the tcp.c:1587 nvmf_tcp_qpair_set_recv_state error above repeats verbatim for tqpair=0x1dc9ec0, 21:25:23.898581 through 21:25:23.899242, while the target drains the revoked queue pair)
21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
21:25:23 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
21:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable
21:25:23 -- common/autotest_common.sh@10 -- # set +x
21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
21:25:23 -- target/host_management.sh@87 -- # sleep 1
[2024-04-24 21:25:23.916332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-04-24 21:25:23.916376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the remaining admin ASYNC EVENT REQUESTs, cid:1 through cid:3, print and abort the same way, 21:25:23.916395 through 21:25:23.916469)
[2024-04-24 21:25:23.916482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d5160 is same with the state(5) to be set
[2024-04-24 21:25:23.916587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-24 21:25:23.916608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(matching nvme_io_qpair_print_command/spdk_nvme_print_completion pairs follow for every remaining in-flight command on qid:1, READ cid:63 lba:57216 and WRITE cid:0 through cid:61 covering lba 57344 through 65152, all len:128; together with READ cid:62 above that is the full 64-command window of the 64-deep verify job, every command aborted with SQ DELETION (00/08), 21:25:23.916641 through 21:25:23.918513)
21:25:23.918457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.303 [2024-04-24 21:25:23.918471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.303 [2024-04-24 21:25:23.918485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.303 [2024-04-24 21:25:23.918500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.303 [2024-04-24 21:25:23.918513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.303 [2024-04-24 21:25:23.918598] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2605db0 was disconnected and freed. reset controller. 00:10:58.303 [2024-04-24 21:25:23.919748] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:58.303 task offset: 57088 on job bdev=Nvme0n1 fails 00:10:58.303 00:10:58.303 Latency(us) 00:10:58.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:58.303 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:58.303 Job: Nvme0n1 ended in about 0.40 seconds with error 00:10:58.303 Verification LBA range: start 0x0 length 0x400 00:10:58.303 Nvme0n1 : 0.40 1107.87 69.24 158.98 0.00 49178.94 2633.58 41360.50 00:10:58.303 =================================================================================================================== 00:10:58.303 Total : 1107.87 69.24 158.98 0.00 49178.94 2633.58 41360.50 00:10:58.303 [2024-04-24 21:25:23.921595] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:58.303 [2024-04-24 21:25:23.921642] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d5160 (9): Bad file descriptor 00:10:58.560 [2024-04-24 21:25:23.977849] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
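For reference, the failed-run table above is internally consistent: with 64 KiB I/Os, throughput equals IOPS × IO size, and 1107.87 IOPS × 0.0625 MiB = 69.24 MiB/s, which matches the MiB/s column. The 158.98 Fail/s lines up with the in-flight WRITEs returned as ABORTED - SQ DELETION while the controller was being reset, and TO/s stays 0.00 because those commands were aborted rather than timed out.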
00:10:59.494 21:25:24 -- target/host_management.sh@91 -- # kill -9 2562250 00:10:59.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2562250) - No such process 00:10:59.494 21:25:24 -- target/host_management.sh@91 -- # true 00:10:59.494 21:25:24 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:59.494 21:25:24 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:59.494 21:25:24 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:59.494 21:25:24 -- nvmf/common.sh@521 -- # config=() 00:10:59.494 21:25:24 -- nvmf/common.sh@521 -- # local subsystem config 00:10:59.494 21:25:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:59.494 21:25:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:59.494 { 00:10:59.494 "params": { 00:10:59.494 "name": "Nvme$subsystem", 00:10:59.494 "trtype": "$TEST_TRANSPORT", 00:10:59.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:59.494 "adrfam": "ipv4", 00:10:59.494 "trsvcid": "$NVMF_PORT", 00:10:59.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:59.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:59.494 "hdgst": ${hdgst:-false}, 00:10:59.494 "ddgst": ${ddgst:-false} 00:10:59.494 }, 00:10:59.494 "method": "bdev_nvme_attach_controller" 00:10:59.494 } 00:10:59.494 EOF 00:10:59.494 )") 00:10:59.494 21:25:24 -- nvmf/common.sh@543 -- # cat 00:10:59.494 21:25:24 -- nvmf/common.sh@545 -- # jq . 00:10:59.494 21:25:24 -- nvmf/common.sh@546 -- # IFS=, 00:10:59.494 21:25:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:59.494 "params": { 00:10:59.494 "name": "Nvme0", 00:10:59.494 "trtype": "tcp", 00:10:59.494 "traddr": "10.0.0.2", 00:10:59.494 "adrfam": "ipv4", 00:10:59.494 "trsvcid": "4420", 00:10:59.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:59.494 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:59.494 "hdgst": false, 00:10:59.494 "ddgst": false 00:10:59.494 }, 00:10:59.494 "method": "bdev_nvme_attach_controller" 00:10:59.494 }' 00:10:59.494 [2024-04-24 21:25:24.958257] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:10:59.494 [2024-04-24 21:25:24.958357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562527 ] 00:10:59.494 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.494 [2024-04-24 21:25:25.019116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.494 [2024-04-24 21:25:25.128779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.062 Running I/O for 1 seconds... 
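For readers unfamiliar with the /dev/fd/62 idiom above: gen_nvmf_target_json emits a JSON config on a process-substitution file descriptor, which bdevperf consumes via --json. A minimal self-contained sketch of the same pattern, run from an SPDK build tree; the "subsystems" wrapper is the standard SPDK config layout, the addresses repeat the values printed above, and writing to a temp file instead of a process substitution is an editor's simplification:

#!/usr/bin/env bash
# Build a bdev config that attaches one NVMe-oF/TCP controller, then drive it with bdevperf.
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same shape of run as above: queue depth 64, 64 KiB I/Os, verify workload, 1 second.
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 1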
00:11:00.995 00:11:00.995 Latency(us) 00:11:00.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.995 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:00.995 Verification LBA range: start 0x0 length 0x400 00:11:00.995 Nvme0n1 : 1.01 1210.44 75.65 0.00 0.00 52114.33 12184.84 43302.31 00:11:00.995 =================================================================================================================== 00:11:00.995 Total : 1210.44 75.65 0.00 0.00 52114.33 12184.84 43302.31 00:11:01.255 21:25:26 -- target/host_management.sh@102 -- # stoptarget 00:11:01.255 21:25:26 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:01.255 21:25:26 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:01.255 21:25:26 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:01.255 21:25:26 -- target/host_management.sh@40 -- # nvmftestfini 00:11:01.255 21:25:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:01.255 21:25:26 -- nvmf/common.sh@117 -- # sync 00:11:01.255 21:25:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:01.255 21:25:26 -- nvmf/common.sh@120 -- # set +e 00:11:01.255 21:25:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:01.255 21:25:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:01.255 rmmod nvme_tcp 00:11:01.255 rmmod nvme_fabrics 00:11:01.255 rmmod nvme_keyring 00:11:01.255 21:25:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:01.255 21:25:26 -- nvmf/common.sh@124 -- # set -e 00:11:01.255 21:25:26 -- nvmf/common.sh@125 -- # return 0 00:11:01.255 21:25:26 -- nvmf/common.sh@478 -- # '[' -n 2562076 ']' 00:11:01.255 21:25:26 -- nvmf/common.sh@479 -- # killprocess 2562076 00:11:01.255 21:25:26 -- common/autotest_common.sh@936 -- # '[' -z 2562076 ']' 00:11:01.255 21:25:26 -- common/autotest_common.sh@940 -- # kill -0 2562076 00:11:01.255 21:25:26 -- common/autotest_common.sh@941 -- # uname 00:11:01.255 21:25:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:01.255 21:25:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2562076 00:11:01.255 21:25:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:01.255 21:25:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:01.255 21:25:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2562076' 00:11:01.255 killing process with pid 2562076 00:11:01.255 21:25:26 -- common/autotest_common.sh@955 -- # kill 2562076 00:11:01.255 21:25:26 -- common/autotest_common.sh@960 -- # wait 2562076 00:11:01.514 [2024-04-24 21:25:27.120371] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:01.514 21:25:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:01.514 21:25:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:01.514 21:25:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:01.514 21:25:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:01.514 21:25:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:01.514 21:25:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.514 21:25:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:01.514 21:25:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.047 21:25:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
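nvmftestfini above tears down in a fixed order: settle I/O, unload the kernel initiator modules, kill the target, then drop the test address. Collapsed from the trace into a sketch; the pid and interface name are the ones from this run, and the retry loop is a simplification of the `for i in {1..20}` guard above:

sync                                  # settle outstanding I/O before touching modules
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break  # verbose removal; the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above are its output
done
modprobe -v -r nvme-fabrics
kill 2562076 && wait 2562076          # killprocess: stop the nvmf_tgt reactor process
ip -4 addr flush cvl_0_1              # remove the initiator-side 10.0.0.1/24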
00:11:04.047 00:11:04.047 real 0m7.209s 00:11:04.047 user 0m21.927s 00:11:04.047 sys 0m1.188s 00:11:04.047 21:25:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:04.047 21:25:29 -- common/autotest_common.sh@10 -- # set +x 00:11:04.047 ************************************ 00:11:04.047 END TEST nvmf_host_management 00:11:04.047 ************************************ 00:11:04.047 21:25:29 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:04.047 00:11:04.047 real 0m9.524s 00:11:04.047 user 0m22.734s 00:11:04.047 sys 0m2.714s 00:11:04.047 21:25:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:04.047 21:25:29 -- common/autotest_common.sh@10 -- # set +x 00:11:04.047 ************************************ 00:11:04.047 END TEST nvmf_host_management 00:11:04.047 ************************************ 00:11:04.047 21:25:29 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:04.047 21:25:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:04.047 21:25:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:04.047 21:25:29 -- common/autotest_common.sh@10 -- # set +x 00:11:04.047 ************************************ 00:11:04.047 START TEST nvmf_lvol 00:11:04.047 ************************************ 00:11:04.047 21:25:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:04.047 * Looking for test storage... 00:11:04.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.047 21:25:29 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.047 21:25:29 -- nvmf/common.sh@7 -- # uname -s 00:11:04.047 21:25:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.047 21:25:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.047 21:25:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.047 21:25:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.047 21:25:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.047 21:25:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.047 21:25:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.047 21:25:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.047 21:25:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.047 21:25:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.047 21:25:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.047 21:25:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.047 21:25:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.047 21:25:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.047 21:25:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.047 21:25:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.047 21:25:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.047 21:25:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.047 21:25:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.047 21:25:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.047 21:25:29 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.047 21:25:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.047 21:25:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.047 21:25:29 -- paths/export.sh@5 -- # export PATH 00:11:04.047 21:25:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.047 21:25:29 -- nvmf/common.sh@47 -- # : 0 00:11:04.047 21:25:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.047 21:25:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.047 21:25:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.047 21:25:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.047 21:25:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.047 21:25:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:04.047 21:25:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.047 21:25:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.047 21:25:29 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.047 21:25:29 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.047 21:25:29 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:04.047 21:25:29 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:04.047 21:25:29 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:04.047 21:25:29 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:04.047 21:25:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:04.047 21:25:29 -- nvmf/common.sh@435 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.047 21:25:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:04.047 21:25:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:04.047 21:25:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:04.047 21:25:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.047 21:25:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.047 21:25:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.047 21:25:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:04.047 21:25:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:04.047 21:25:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:04.047 21:25:29 -- common/autotest_common.sh@10 -- # set +x 00:11:05.986 21:25:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:05.986 21:25:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:05.986 21:25:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:05.986 21:25:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:05.986 21:25:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:05.986 21:25:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:05.986 21:25:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:05.986 21:25:31 -- nvmf/common.sh@295 -- # net_devs=() 00:11:05.986 21:25:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:05.986 21:25:31 -- nvmf/common.sh@296 -- # e810=() 00:11:05.986 21:25:31 -- nvmf/common.sh@296 -- # local -ga e810 00:11:05.986 21:25:31 -- nvmf/common.sh@297 -- # x722=() 00:11:05.986 21:25:31 -- nvmf/common.sh@297 -- # local -ga x722 00:11:05.986 21:25:31 -- nvmf/common.sh@298 -- # mlx=() 00:11:05.986 21:25:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:05.986 21:25:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.986 21:25:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.986 21:25:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.986 21:25:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.986 21:25:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.986 21:25:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.986 21:25:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.986 21:25:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.986 21:25:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.986 21:25:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.986 21:25:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.986 21:25:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:05.986 21:25:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:05.986 21:25:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:05.986 21:25:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.986 21:25:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:05.986 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:05.986 21:25:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.986 
21:25:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.986 21:25:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:05.986 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:05.986 21:25:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:05.986 21:25:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.986 21:25:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.986 21:25:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:05.986 21:25:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.986 21:25:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:05.986 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:05.986 21:25:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.986 21:25:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.986 21:25:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.986 21:25:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:05.986 21:25:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.986 21:25:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:05.986 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:05.986 21:25:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.986 21:25:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:05.986 21:25:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:05.986 21:25:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:05.986 21:25:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.986 21:25:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.986 21:25:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.986 21:25:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:05.986 21:25:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.986 21:25:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.986 21:25:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:05.986 21:25:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.986 21:25:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.986 21:25:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:05.986 21:25:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:05.986 21:25:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.986 21:25:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.986 21:25:31 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:11:05.986 21:25:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.986 21:25:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:05.986 21:25:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.986 21:25:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.986 21:25:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.986 21:25:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:05.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:11:05.986 00:11:05.986 --- 10.0.0.2 ping statistics --- 00:11:05.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.986 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:11:05.986 21:25:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:05.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:11:05.986 00:11:05.986 --- 10.0.0.1 ping statistics --- 00:11:05.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.986 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:11:05.986 21:25:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.986 21:25:31 -- nvmf/common.sh@411 -- # return 0 00:11:05.986 21:25:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:05.986 21:25:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.986 21:25:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:05.986 21:25:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.986 21:25:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:05.986 21:25:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:05.986 21:25:31 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:05.986 21:25:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:05.986 21:25:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:05.986 21:25:31 -- common/autotest_common.sh@10 -- # set +x 00:11:05.986 21:25:31 -- nvmf/common.sh@470 -- # nvmfpid=2564753 00:11:05.986 21:25:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:05.986 21:25:31 -- nvmf/common.sh@471 -- # waitforlisten 2564753 00:11:05.986 21:25:31 -- common/autotest_common.sh@817 -- # '[' -z 2564753 ']' 00:11:05.986 21:25:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.986 21:25:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:05.987 21:25:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.987 21:25:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:05.987 21:25:31 -- common/autotest_common.sh@10 -- # set +x 00:11:05.987 [2024-04-24 21:25:31.578555] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
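The namespace plumbing scattered through the trace above is easier to follow in one place. nvmf_tcp_init moves one e810 port (cvl_0_0) into a private network namespace for the target and leaves its sibling port (cvl_0_1) in the root namespace for the initiator, so both ends of the TCP connection run on one host over real NICs. Gathered from this log into a runnable sketch, with names and addresses exactly as in this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                             # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator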
00:11:05.987 [2024-04-24 21:25:31.578664] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.987 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.987 [2024-04-24 21:25:31.644953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:06.245 [2024-04-24 21:25:31.760538] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.245 [2024-04-24 21:25:31.760592] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.245 [2024-04-24 21:25:31.760617] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.245 [2024-04-24 21:25:31.760637] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.245 [2024-04-24 21:25:31.760664] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.245 [2024-04-24 21:25:31.760740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.245 [2024-04-24 21:25:31.760768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.245 [2024-04-24 21:25:31.760771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.179 21:25:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:07.179 21:25:32 -- common/autotest_common.sh@850 -- # return 0 00:11:07.179 21:25:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:07.179 21:25:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:07.179 21:25:32 -- common/autotest_common.sh@10 -- # set +x 00:11:07.179 21:25:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.179 21:25:32 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:07.179 [2024-04-24 21:25:32.734821] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.179 21:25:32 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.438 21:25:33 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:07.438 21:25:33 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.697 21:25:33 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:07.697 21:25:33 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:07.955 21:25:33 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:08.213 21:25:33 -- target/nvmf_lvol.sh@29 -- # lvs=bbac9bcd-8330-4c38-b74e-4a0742b5fa0c 00:11:08.213 21:25:33 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bbac9bcd-8330-4c38-b74e-4a0742b5fa0c lvol 20 00:11:08.470 21:25:34 -- target/nvmf_lvol.sh@32 -- # lvol=fb24c0cf-2be4-44c8-a5f2-3f0d0580feac 00:11:08.470 21:25:34 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:08.728 21:25:34 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fb24c0cf-2be4-44c8-a5f2-3f0d0580feac 00:11:08.986 21:25:34 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:09.243 [2024-04-24 21:25:34.755887] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.243 21:25:34 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:09.501 21:25:35 -- target/nvmf_lvol.sh@42 -- # perf_pid=2565186 00:11:09.501 21:25:35 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:09.501 21:25:35 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:09.501 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.436 21:25:36 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fb24c0cf-2be4-44c8-a5f2-3f0d0580feac MY_SNAPSHOT 00:11:10.693 21:25:36 -- target/nvmf_lvol.sh@47 -- # snapshot=7dd8cab4-7e2c-41f1-9fc5-2ed5441bcf80 00:11:10.693 21:25:36 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fb24c0cf-2be4-44c8-a5f2-3f0d0580feac 30 00:11:10.952 21:25:36 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7dd8cab4-7e2c-41f1-9fc5-2ed5441bcf80 MY_CLONE 00:11:11.209 21:25:36 -- target/nvmf_lvol.sh@49 -- # clone=3f0ee29d-3fa6-417e-b7d6-2a4f77721a24 00:11:11.209 21:25:36 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3f0ee29d-3fa6-417e-b7d6-2a4f77721a24 00:11:11.775 21:25:37 -- target/nvmf_lvol.sh@53 -- # wait 2565186 00:11:19.884 Initializing NVMe Controllers 00:11:19.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:19.884 Controller IO queue size 128, less than required. 00:11:19.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:19.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:19.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:19.884 Initialization complete. Launching workers. 
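The nvmf_lvol test in this stretch builds a RAID-0 of two 64 MiB malloc bdevs, carves a 20 MiB lvol out of an lvstore on top, exports it over NVMe/TCP, then snapshots, resizes, clones, and inflates it while spdk_nvme_perf drives random writes. The RPC sequence, gathered into one sketch; rpc.py abbreviates scripts/rpc.py (default socket /var/tmp/spdk.sock), and capturing UUIDs with command substitution is how the script itself consumes the RPC output:

rpc.py bdev_malloc_create 64 512                              # Malloc0: 64 MiB, 512 B blocks
rpc.py bdev_malloc_create 64 512                              # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)              # prints the lvstore UUID
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)             # 20 MiB lvol; prints its UUID
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)         # freeze contents; lvol becomes a thin child
rpc.py bdev_lvol_resize "$lvol" 30                            # grow the live lvol to 30 MiB
clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)              # writable clone backed by the snapshot
rpc.py bdev_lvol_inflate "$clone"                             # copy shared clusters so the clone stands alone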
00:11:19.884 ======================================================== 00:11:19.884 Latency(us) 00:11:19.884 Device Information : IOPS MiB/s Average min max 00:11:19.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10152.10 39.66 12613.54 1940.31 84146.73 00:11:19.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10566.10 41.27 12119.10 2275.58 79021.28 00:11:19.884 ======================================================== 00:11:19.884 Total : 20718.20 80.93 12361.38 1940.31 84146.73 00:11:19.884 00:11:19.884 21:25:45 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:20.140 21:25:45 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fb24c0cf-2be4-44c8-a5f2-3f0d0580feac 00:11:20.398 21:25:45 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bbac9bcd-8330-4c38-b74e-4a0742b5fa0c 00:11:20.655 21:25:46 -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:20.655 21:25:46 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:20.655 21:25:46 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:20.655 21:25:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:20.655 21:25:46 -- nvmf/common.sh@117 -- # sync 00:11:20.655 21:25:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.655 21:25:46 -- nvmf/common.sh@120 -- # set +e 00:11:20.655 21:25:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.655 21:25:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:20.655 rmmod nvme_tcp 00:11:20.655 rmmod nvme_fabrics 00:11:20.655 rmmod nvme_keyring 00:11:20.655 21:25:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.655 21:25:46 -- nvmf/common.sh@124 -- # set -e 00:11:20.655 21:25:46 -- nvmf/common.sh@125 -- # return 0 00:11:20.655 21:25:46 -- nvmf/common.sh@478 -- # '[' -n 2564753 ']' 00:11:20.655 21:25:46 -- nvmf/common.sh@479 -- # killprocess 2564753 00:11:20.655 21:25:46 -- common/autotest_common.sh@936 -- # '[' -z 2564753 ']' 00:11:20.655 21:25:46 -- common/autotest_common.sh@940 -- # kill -0 2564753 00:11:20.655 21:25:46 -- common/autotest_common.sh@941 -- # uname 00:11:20.655 21:25:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:20.655 21:25:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2564753 00:11:20.655 21:25:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:20.655 21:25:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:20.655 21:25:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2564753' 00:11:20.655 killing process with pid 2564753 00:11:20.655 21:25:46 -- common/autotest_common.sh@955 -- # kill 2564753 00:11:20.655 21:25:46 -- common/autotest_common.sh@960 -- # wait 2564753 00:11:21.222 21:25:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:21.222 21:25:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:21.222 21:25:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:21.222 21:25:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.222 21:25:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:21.222 21:25:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.222 21:25:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.222 21:25:46 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:11:23.125 21:25:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:23.125 00:11:23.125 real 0m19.372s 00:11:23.125 user 1m5.515s 00:11:23.125 sys 0m5.824s 00:11:23.125 21:25:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:23.125 21:25:48 -- common/autotest_common.sh@10 -- # set +x 00:11:23.125 ************************************ 00:11:23.125 END TEST nvmf_lvol 00:11:23.125 ************************************ 00:11:23.125 21:25:48 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:23.125 21:25:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:23.125 21:25:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.125 21:25:48 -- common/autotest_common.sh@10 -- # set +x 00:11:23.383 ************************************ 00:11:23.383 START TEST nvmf_lvs_grow 00:11:23.383 ************************************ 00:11:23.383 21:25:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:23.383 * Looking for test storage... 00:11:23.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.383 21:25:48 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.383 21:25:48 -- nvmf/common.sh@7 -- # uname -s 00:11:23.384 21:25:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.384 21:25:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.384 21:25:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.384 21:25:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.384 21:25:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.384 21:25:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.384 21:25:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.384 21:25:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.384 21:25:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.384 21:25:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.384 21:25:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.384 21:25:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.384 21:25:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.384 21:25:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.384 21:25:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.384 21:25:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.384 21:25:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.384 21:25:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.384 21:25:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.384 21:25:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.384 21:25:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.384 21:25:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.384 21:25:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.384 21:25:48 -- paths/export.sh@5 -- # export PATH 00:11:23.384 21:25:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.384 21:25:48 -- nvmf/common.sh@47 -- # : 0 00:11:23.384 21:25:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.384 21:25:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.384 21:25:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.384 21:25:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.384 21:25:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.384 21:25:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.384 21:25:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.384 21:25:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.384 21:25:48 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.384 21:25:48 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:23.384 21:25:48 -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:23.384 21:25:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:23.384 21:25:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.384 21:25:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:23.384 21:25:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:23.384 21:25:48 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:11:23.384 21:25:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.384 21:25:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.384 21:25:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.384 21:25:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:23.384 21:25:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:23.384 21:25:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:23.384 21:25:48 -- common/autotest_common.sh@10 -- # set +x 00:11:25.320 21:25:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:25.320 21:25:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.320 21:25:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.320 21:25:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.320 21:25:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.320 21:25:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.320 21:25:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.320 21:25:50 -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.320 21:25:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.320 21:25:50 -- nvmf/common.sh@296 -- # e810=() 00:11:25.320 21:25:50 -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.320 21:25:50 -- nvmf/common.sh@297 -- # x722=() 00:11:25.320 21:25:50 -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.320 21:25:50 -- nvmf/common.sh@298 -- # mlx=() 00:11:25.320 21:25:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.320 21:25:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.320 21:25:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.320 21:25:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.320 21:25:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.320 21:25:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.320 21:25:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.320 21:25:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.320 21:25:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.320 21:25:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.320 21:25:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.320 21:25:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.320 21:25:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.320 21:25:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:25.320 21:25:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.320 21:25:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.320 21:25:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:25.320 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:25.320 21:25:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.320 
21:25:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.320 21:25:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:25.320 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:25.320 21:25:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.320 21:25:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.320 21:25:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.320 21:25:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:25.320 21:25:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.320 21:25:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:25.320 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:25.320 21:25:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.320 21:25:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.320 21:25:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.320 21:25:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:25.320 21:25:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.320 21:25:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:25.320 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:25.320 21:25:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.320 21:25:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:25.320 21:25:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:25.320 21:25:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:25.320 21:25:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:25.320 21:25:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.320 21:25:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.320 21:25:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.320 21:25:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:25.320 21:25:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.320 21:25:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.320 21:25:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:25.320 21:25:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.320 21:25:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.320 21:25:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:25.320 21:25:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:25.320 21:25:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.320 21:25:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.320 21:25:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.320 21:25:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.320 21:25:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:25.320 
21:25:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.320 21:25:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.320 21:25:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.578 21:25:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:25.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:11:25.578 00:11:25.578 --- 10.0.0.2 ping statistics --- 00:11:25.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.579 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:25.579 21:25:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:11:25.579 00:11:25.579 --- 10.0.0.1 ping statistics --- 00:11:25.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.579 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:11:25.579 21:25:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.579 21:25:51 -- nvmf/common.sh@411 -- # return 0 00:11:25.579 21:25:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:25.579 21:25:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.579 21:25:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:25.579 21:25:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:25.579 21:25:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.579 21:25:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:25.579 21:25:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:25.579 21:25:51 -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:25.579 21:25:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:25.579 21:25:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:25.579 21:25:51 -- common/autotest_common.sh@10 -- # set +x 00:11:25.579 21:25:51 -- nvmf/common.sh@470 -- # nvmfpid=2568458 00:11:25.579 21:25:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:25.579 21:25:51 -- nvmf/common.sh@471 -- # waitforlisten 2568458 00:11:25.579 21:25:51 -- common/autotest_common.sh@817 -- # '[' -z 2568458 ']' 00:11:25.579 21:25:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.579 21:25:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:25.579 21:25:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.579 21:25:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:25.579 21:25:51 -- common/autotest_common.sh@10 -- # set +x 00:11:25.579 [2024-04-24 21:25:51.079776] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
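As in the earlier tests, the target here runs inside the namespace (the `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1` line above), and waitforlisten gates every subsequent RPC on the target's socket answering. A simplified sketch of that gate, assuming the default /var/tmp/spdk.sock RPC socket shown in the log; the real waitforlisten also bounds its retries:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Poll the RPC socket until the target answers; rpc_get_methods is a cheap query.
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"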
00:11:25.579 [2024-04-24 21:25:51.079853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.579 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.579 [2024-04-24 21:25:51.149322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.837 [2024-04-24 21:25:51.264391] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.837 [2024-04-24 21:25:51.264446] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.837 [2024-04-24 21:25:51.264471] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.837 [2024-04-24 21:25:51.264484] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.837 [2024-04-24 21:25:51.264497] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.837 [2024-04-24 21:25:51.264536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.402 21:25:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:26.402 21:25:52 -- common/autotest_common.sh@850 -- # return 0 00:11:26.402 21:25:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:26.402 21:25:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:26.402 21:25:52 -- common/autotest_common.sh@10 -- # set +x 00:11:26.402 21:25:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.402 21:25:52 -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:26.661 [2024-04-24 21:25:52.308732] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.661 21:25:52 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:26.661 21:25:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:26.661 21:25:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:26.661 21:25:52 -- common/autotest_common.sh@10 -- # set +x 00:11:26.918 ************************************ 00:11:26.918 START TEST lvs_grow_clean 00:11:26.918 ************************************ 00:11:26.918 21:25:52 -- common/autotest_common.sh@1111 -- # lvs_grow 00:11:26.918 21:25:52 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:26.918 21:25:52 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:26.918 21:25:52 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:26.918 21:25:52 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:26.918 21:25:52 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:26.918 21:25:52 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:26.918 21:25:52 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:26.918 21:25:52 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:26.918 21:25:52 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:27.176 21:25:52 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:27.176 21:25:52 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:27.433 21:25:53 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:27.433 21:25:53 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:27.433 21:25:53 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:27.691 21:25:53 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:27.691 21:25:53 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:27.691 21:25:53 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 lvol 150 00:11:27.949 21:25:53 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9c46ea8d-f70c-4cc3-9589-4556472cb48d 00:11:27.949 21:25:53 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:27.949 21:25:53 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:28.207 [2024-04-24 21:25:53.720797] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:28.207 [2024-04-24 21:25:53.720877] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:28.207 true 00:11:28.207 21:25:53 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:28.207 21:25:53 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:28.465 21:25:53 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:28.465 21:25:53 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:28.723 21:25:54 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9c46ea8d-f70c-4cc3-9589-4556472cb48d 00:11:28.980 21:25:54 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:29.238 [2024-04-24 21:25:54.800098] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.238 21:25:54 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:29.496 21:25:55 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2569033 00:11:29.496 21:25:55 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:29.496 21:25:55 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:29.496 21:25:55 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2569033 
/var/tmp/bdevperf.sock 00:11:29.496 21:25:55 -- common/autotest_common.sh@817 -- # '[' -z 2569033 ']' 00:11:29.496 21:25:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:29.496 21:25:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:29.496 21:25:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:29.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:29.496 21:25:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:29.496 21:25:55 -- common/autotest_common.sh@10 -- # set +x 00:11:29.496 [2024-04-24 21:25:55.146153] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:11:29.496 [2024-04-24 21:25:55.146225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569033 ] 00:11:29.496 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.755 [2024-04-24 21:25:55.207683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.755 [2024-04-24 21:25:55.321851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.013 21:25:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:30.013 21:25:55 -- common/autotest_common.sh@850 -- # return 0 00:11:30.013 21:25:55 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:30.270 Nvme0n1 00:11:30.528 21:25:55 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:30.787 [ 00:11:30.787 { 00:11:30.787 "name": "Nvme0n1", 00:11:30.787 "aliases": [ 00:11:30.787 "9c46ea8d-f70c-4cc3-9589-4556472cb48d" 00:11:30.787 ], 00:11:30.787 "product_name": "NVMe disk", 00:11:30.787 "block_size": 4096, 00:11:30.787 "num_blocks": 38912, 00:11:30.787 "uuid": "9c46ea8d-f70c-4cc3-9589-4556472cb48d", 00:11:30.787 "assigned_rate_limits": { 00:11:30.787 "rw_ios_per_sec": 0, 00:11:30.787 "rw_mbytes_per_sec": 0, 00:11:30.787 "r_mbytes_per_sec": 0, 00:11:30.787 "w_mbytes_per_sec": 0 00:11:30.787 }, 00:11:30.787 "claimed": false, 00:11:30.787 "zoned": false, 00:11:30.787 "supported_io_types": { 00:11:30.787 "read": true, 00:11:30.787 "write": true, 00:11:30.787 "unmap": true, 00:11:30.787 "write_zeroes": true, 00:11:30.787 "flush": true, 00:11:30.787 "reset": true, 00:11:30.787 "compare": true, 00:11:30.787 "compare_and_write": true, 00:11:30.787 "abort": true, 00:11:30.787 "nvme_admin": true, 00:11:30.787 "nvme_io": true 00:11:30.787 }, 00:11:30.787 "memory_domains": [ 00:11:30.787 { 00:11:30.787 "dma_device_id": "system", 00:11:30.787 "dma_device_type": 1 00:11:30.787 } 00:11:30.787 ], 00:11:30.787 "driver_specific": { 00:11:30.787 "nvme": [ 00:11:30.787 { 00:11:30.787 "trid": { 00:11:30.787 "trtype": "TCP", 00:11:30.787 "adrfam": "IPv4", 00:11:30.787 "traddr": "10.0.0.2", 00:11:30.787 "trsvcid": "4420", 00:11:30.787 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:30.787 }, 00:11:30.787 "ctrlr_data": { 00:11:30.787 "cntlid": 1, 00:11:30.787 "vendor_id": "0x8086", 00:11:30.787 "model_number": "SPDK bdev Controller", 00:11:30.787 "serial_number": "SPDK0", 
00:11:30.787 "firmware_revision": "24.05", 00:11:30.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:30.787 "oacs": { 00:11:30.787 "security": 0, 00:11:30.787 "format": 0, 00:11:30.787 "firmware": 0, 00:11:30.787 "ns_manage": 0 00:11:30.787 }, 00:11:30.787 "multi_ctrlr": true, 00:11:30.787 "ana_reporting": false 00:11:30.787 }, 00:11:30.787 "vs": { 00:11:30.787 "nvme_version": "1.3" 00:11:30.787 }, 00:11:30.787 "ns_data": { 00:11:30.787 "id": 1, 00:11:30.787 "can_share": true 00:11:30.787 } 00:11:30.787 } 00:11:30.787 ], 00:11:30.787 "mp_policy": "active_passive" 00:11:30.787 } 00:11:30.787 } 00:11:30.787 ] 00:11:30.787 21:25:56 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2569168 00:11:30.787 21:25:56 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:30.787 21:25:56 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:30.787 Running I/O for 10 seconds... 00:11:31.722 Latency(us) 00:11:31.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.722 Nvme0n1 : 1.00 13960.00 54.53 0.00 0.00 0.00 0.00 0.00 00:11:31.722 =================================================================================================================== 00:11:31.722 Total : 13960.00 54.53 0.00 0.00 0.00 0.00 0.00 00:11:31.722 00:11:32.656 21:25:58 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:32.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.914 Nvme0n1 : 2.00 14115.50 55.14 0.00 0.00 0.00 0.00 0.00 00:11:32.914 =================================================================================================================== 00:11:32.914 Total : 14115.50 55.14 0.00 0.00 0.00 0.00 0.00 00:11:32.914 00:11:32.914 true 00:11:32.914 21:25:58 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:32.914 21:25:58 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:33.172 21:25:58 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:33.172 21:25:58 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:33.172 21:25:58 -- target/nvmf_lvs_grow.sh@65 -- # wait 2569168 00:11:33.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.738 Nvme0n1 : 3.00 14231.67 55.59 0.00 0.00 0.00 0.00 0.00 00:11:33.739 =================================================================================================================== 00:11:33.739 Total : 14231.67 55.59 0.00 0.00 0.00 0.00 0.00 00:11:33.739 00:11:35.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.113 Nvme0n1 : 4.00 14246.25 55.65 0.00 0.00 0.00 0.00 0.00 00:11:35.113 =================================================================================================================== 00:11:35.113 Total : 14246.25 55.65 0.00 0.00 0.00 0.00 0.00 00:11:35.113 00:11:35.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.679 Nvme0n1 : 5.00 14299.20 55.86 0.00 0.00 0.00 0.00 0.00 00:11:35.679 =================================================================================================================== 00:11:35.679 Total : 
14299.20 55.86 0.00 0.00 0.00 0.00 0.00 00:11:35.679 00:11:37.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.055 Nvme0n1 : 6.00 14361.33 56.10 0.00 0.00 0.00 0.00 0.00 00:11:37.055 =================================================================================================================== 00:11:37.055 Total : 14361.33 56.10 0.00 0.00 0.00 0.00 0.00 00:11:37.055 00:11:37.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.989 Nvme0n1 : 7.00 14382.86 56.18 0.00 0.00 0.00 0.00 0.00 00:11:37.989 =================================================================================================================== 00:11:37.989 Total : 14382.86 56.18 0.00 0.00 0.00 0.00 0.00 00:11:37.989 00:11:38.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.974 Nvme0n1 : 8.00 14432.88 56.38 0.00 0.00 0.00 0.00 0.00 00:11:38.974 =================================================================================================================== 00:11:38.974 Total : 14432.88 56.38 0.00 0.00 0.00 0.00 0.00 00:11:38.974 00:11:39.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.908 Nvme0n1 : 9.00 14450.44 56.45 0.00 0.00 0.00 0.00 0.00 00:11:39.908 =================================================================================================================== 00:11:39.908 Total : 14450.44 56.45 0.00 0.00 0.00 0.00 0.00 00:11:39.908 00:11:40.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.841 Nvme0n1 : 10.00 14464.60 56.50 0.00 0.00 0.00 0.00 0.00 00:11:40.841 =================================================================================================================== 00:11:40.841 Total : 14464.60 56.50 0.00 0.00 0.00 0.00 0.00 00:11:40.841 00:11:40.841 00:11:40.841 Latency(us) 00:11:40.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.841 Nvme0n1 : 10.01 14466.45 56.51 0.00 0.00 8842.66 5364.24 18447.17 00:11:40.841 =================================================================================================================== 00:11:40.841 Total : 14466.45 56.51 0.00 0.00 8842.66 5364.24 18447.17 00:11:40.841 0 00:11:40.841 21:26:06 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2569033 00:11:40.841 21:26:06 -- common/autotest_common.sh@936 -- # '[' -z 2569033 ']' 00:11:40.841 21:26:06 -- common/autotest_common.sh@940 -- # kill -0 2569033 00:11:40.841 21:26:06 -- common/autotest_common.sh@941 -- # uname 00:11:40.841 21:26:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:40.841 21:26:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2569033 00:11:40.841 21:26:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:40.841 21:26:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:40.841 21:26:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2569033' 00:11:40.841 killing process with pid 2569033 00:11:40.841 21:26:06 -- common/autotest_common.sh@955 -- # kill 2569033 00:11:40.841 Received shutdown signal, test time was about 10.000000 seconds 00:11:40.841 00:11:40.841 Latency(us) 00:11:40.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.841 =================================================================================================================== 
00:11:40.841 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:40.841 21:26:06 -- common/autotest_common.sh@960 -- # wait 2569033 00:11:41.099 21:26:06 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:41.357 21:26:06 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:41.615 21:26:07 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:41.615 21:26:07 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:41.874 21:26:07 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:41.874 21:26:07 -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:41.874 21:26:07 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:42.132 [2024-04-24 21:26:07.761402] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:42.132 21:26:07 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:42.132 21:26:07 -- common/autotest_common.sh@638 -- # local es=0 00:11:42.132 21:26:07 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:42.132 21:26:07 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.132 21:26:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:42.132 21:26:07 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.132 21:26:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:42.132 21:26:07 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.132 21:26:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:42.132 21:26:07 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.132 21:26:07 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:42.132 21:26:07 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:42.391 request: 00:11:42.391 { 00:11:42.391 "uuid": "1cebcb6f-6894-4d75-a91d-8d3553a86993", 00:11:42.391 "method": "bdev_lvol_get_lvstores", 00:11:42.391 "req_id": 1 00:11:42.391 } 00:11:42.391 Got JSON-RPC error response 00:11:42.391 response: 00:11:42.391 { 00:11:42.391 "code": -19, 00:11:42.391 "message": "No such device" 00:11:42.391 } 00:11:42.649 21:26:08 -- common/autotest_common.sh@641 -- # es=1 00:11:42.649 21:26:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:42.649 21:26:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:42.649 21:26:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:42.649 21:26:08 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:42.649 aio_bdev 00:11:42.649 21:26:08 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9c46ea8d-f70c-4cc3-9589-4556472cb48d 00:11:42.649 21:26:08 -- common/autotest_common.sh@885 -- # local bdev_name=9c46ea8d-f70c-4cc3-9589-4556472cb48d 00:11:42.649 21:26:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:42.649 21:26:08 -- common/autotest_common.sh@887 -- # local i 00:11:42.649 21:26:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:42.649 21:26:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:42.649 21:26:08 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:42.908 21:26:08 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9c46ea8d-f70c-4cc3-9589-4556472cb48d -t 2000 00:11:43.166 [ 00:11:43.166 { 00:11:43.166 "name": "9c46ea8d-f70c-4cc3-9589-4556472cb48d", 00:11:43.166 "aliases": [ 00:11:43.166 "lvs/lvol" 00:11:43.166 ], 00:11:43.166 "product_name": "Logical Volume", 00:11:43.166 "block_size": 4096, 00:11:43.166 "num_blocks": 38912, 00:11:43.166 "uuid": "9c46ea8d-f70c-4cc3-9589-4556472cb48d", 00:11:43.166 "assigned_rate_limits": { 00:11:43.166 "rw_ios_per_sec": 0, 00:11:43.166 "rw_mbytes_per_sec": 0, 00:11:43.166 "r_mbytes_per_sec": 0, 00:11:43.166 "w_mbytes_per_sec": 0 00:11:43.166 }, 00:11:43.166 "claimed": false, 00:11:43.166 "zoned": false, 00:11:43.166 "supported_io_types": { 00:11:43.166 "read": true, 00:11:43.166 "write": true, 00:11:43.166 "unmap": true, 00:11:43.166 "write_zeroes": true, 00:11:43.166 "flush": false, 00:11:43.166 "reset": true, 00:11:43.166 "compare": false, 00:11:43.166 "compare_and_write": false, 00:11:43.166 "abort": false, 00:11:43.166 "nvme_admin": false, 00:11:43.166 "nvme_io": false 00:11:43.166 }, 00:11:43.166 "driver_specific": { 00:11:43.166 "lvol": { 00:11:43.166 "lvol_store_uuid": "1cebcb6f-6894-4d75-a91d-8d3553a86993", 00:11:43.166 "base_bdev": "aio_bdev", 00:11:43.166 "thin_provision": false, 00:11:43.166 "snapshot": false, 00:11:43.166 "clone": false, 00:11:43.166 "esnap_clone": false 00:11:43.166 } 00:11:43.166 } 00:11:43.166 } 00:11:43.166 ] 00:11:43.166 21:26:08 -- common/autotest_common.sh@893 -- # return 0 00:11:43.166 21:26:08 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:43.166 21:26:08 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:43.424 21:26:09 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:43.424 21:26:09 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:43.424 21:26:09 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:43.682 21:26:09 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:43.682 21:26:09 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9c46ea8d-f70c-4cc3-9589-4556472cb48d 00:11:44.250 21:26:09 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1cebcb6f-6894-4d75-a91d-8d3553a86993 00:11:44.250 21:26:09 -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:44.508 21:26:10 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:44.508 00:11:44.508 real 0m17.726s 00:11:44.508 user 0m17.260s 00:11:44.508 sys 0m1.856s 00:11:44.508 21:26:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:44.508 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:11:44.508 ************************************ 00:11:44.508 END TEST lvs_grow_clean 00:11:44.509 ************************************ 00:11:44.509 21:26:10 -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:44.509 21:26:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:44.509 21:26:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:44.509 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:11:44.766 ************************************ 00:11:44.766 START TEST lvs_grow_dirty 00:11:44.766 ************************************ 00:11:44.766 21:26:10 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:11:44.766 21:26:10 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:44.766 21:26:10 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:44.766 21:26:10 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:44.766 21:26:10 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:44.766 21:26:10 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:44.766 21:26:10 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:44.766 21:26:10 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:44.766 21:26:10 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:44.766 21:26:10 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:45.024 21:26:10 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:45.024 21:26:10 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:45.283 21:26:10 -- target/nvmf_lvs_grow.sh@28 -- # lvs=65ad3e5f-76d2-4bec-859d-dc0833278eac 00:11:45.283 21:26:10 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:11:45.283 21:26:10 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:45.541 21:26:11 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:45.541 21:26:11 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:45.541 21:26:11 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 65ad3e5f-76d2-4bec-859d-dc0833278eac lvol 150 00:11:45.799 21:26:11 -- target/nvmf_lvs_grow.sh@33 -- # lvol=8d348f04-1b06-4d04-9b43-45c37102f65a 00:11:45.799 21:26:11 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:45.799 21:26:11 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_rescan aio_bdev 00:11:46.057 [2024-04-24 21:26:11.559779] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:46.057 [2024-04-24 21:26:11.559873] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:46.057 true 00:11:46.057 21:26:11 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:11:46.057 21:26:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:46.315 21:26:11 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:46.315 21:26:11 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:46.575 21:26:12 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8d348f04-1b06-4d04-9b43-45c37102f65a 00:11:46.834 21:26:12 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:47.092 [2024-04-24 21:26:12.574893] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.092 21:26:12 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:47.350 21:26:12 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2571094 00:11:47.350 21:26:12 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:47.350 21:26:12 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:47.350 21:26:12 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2571094 /var/tmp/bdevperf.sock 00:11:47.350 21:26:12 -- common/autotest_common.sh@817 -- # '[' -z 2571094 ']' 00:11:47.351 21:26:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:47.351 21:26:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:47.351 21:26:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:47.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:47.351 21:26:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:47.351 21:26:12 -- common/autotest_common.sh@10 -- # set +x 00:11:47.351 [2024-04-24 21:26:12.878318] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
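(The lvs_grow flow traced above, identical in the clean and dirty variants, reduces to the RPC sequence below. All commands and the lvstore UUID are taken verbatim from this run's trace; repository paths are shortened for readability.)

  truncate -s 200M test/nvmf/target/aio_bdev                             # 200M backing file
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs                      # 49 data clusters of 4MiB
  scripts/rpc.py bdev_lvol_create -u 65ad3e5f-76d2-4bec-859d-dc0833278eac lvol 150
  truncate -s 400M test/nvmf/target/aio_bdev                             # grow the file on disk...
  scripts/rpc.py bdev_aio_rescan aio_bdev                                # ...and let the AIO bdev see the new size
  scripts/rpc.py bdev_lvol_grow_lvstore -u 65ad3e5f-76d2-4bec-859d-dc0833278eac  # 49 -> 99 clusters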
00:11:47.351 [2024-04-24 21:26:12.878393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571094 ] 00:11:47.351 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.351 [2024-04-24 21:26:12.942532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.609 [2024-04-24 21:26:13.060994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.609 21:26:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:47.609 21:26:13 -- common/autotest_common.sh@850 -- # return 0 00:11:47.609 21:26:13 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:48.174 Nvme0n1 00:11:48.174 21:26:13 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:48.432 [ 00:11:48.432 { 00:11:48.432 "name": "Nvme0n1", 00:11:48.432 "aliases": [ 00:11:48.432 "8d348f04-1b06-4d04-9b43-45c37102f65a" 00:11:48.432 ], 00:11:48.432 "product_name": "NVMe disk", 00:11:48.432 "block_size": 4096, 00:11:48.432 "num_blocks": 38912, 00:11:48.432 "uuid": "8d348f04-1b06-4d04-9b43-45c37102f65a", 00:11:48.432 "assigned_rate_limits": { 00:11:48.432 "rw_ios_per_sec": 0, 00:11:48.432 "rw_mbytes_per_sec": 0, 00:11:48.432 "r_mbytes_per_sec": 0, 00:11:48.432 "w_mbytes_per_sec": 0 00:11:48.432 }, 00:11:48.432 "claimed": false, 00:11:48.432 "zoned": false, 00:11:48.432 "supported_io_types": { 00:11:48.432 "read": true, 00:11:48.432 "write": true, 00:11:48.432 "unmap": true, 00:11:48.432 "write_zeroes": true, 00:11:48.432 "flush": true, 00:11:48.432 "reset": true, 00:11:48.432 "compare": true, 00:11:48.432 "compare_and_write": true, 00:11:48.432 "abort": true, 00:11:48.432 "nvme_admin": true, 00:11:48.432 "nvme_io": true 00:11:48.432 }, 00:11:48.432 "memory_domains": [ 00:11:48.432 { 00:11:48.432 "dma_device_id": "system", 00:11:48.432 "dma_device_type": 1 00:11:48.432 } 00:11:48.432 ], 00:11:48.432 "driver_specific": { 00:11:48.432 "nvme": [ 00:11:48.432 { 00:11:48.432 "trid": { 00:11:48.432 "trtype": "TCP", 00:11:48.432 "adrfam": "IPv4", 00:11:48.432 "traddr": "10.0.0.2", 00:11:48.432 "trsvcid": "4420", 00:11:48.432 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:48.432 }, 00:11:48.432 "ctrlr_data": { 00:11:48.432 "cntlid": 1, 00:11:48.432 "vendor_id": "0x8086", 00:11:48.432 "model_number": "SPDK bdev Controller", 00:11:48.432 "serial_number": "SPDK0", 00:11:48.432 "firmware_revision": "24.05", 00:11:48.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:48.432 "oacs": { 00:11:48.433 "security": 0, 00:11:48.433 "format": 0, 00:11:48.433 "firmware": 0, 00:11:48.433 "ns_manage": 0 00:11:48.433 }, 00:11:48.433 "multi_ctrlr": true, 00:11:48.433 "ana_reporting": false 00:11:48.433 }, 00:11:48.433 "vs": { 00:11:48.433 "nvme_version": "1.3" 00:11:48.433 }, 00:11:48.433 "ns_data": { 00:11:48.433 "id": 1, 00:11:48.433 "can_share": true 00:11:48.433 } 00:11:48.433 } 00:11:48.433 ], 00:11:48.433 "mp_policy": "active_passive" 00:11:48.433 } 00:11:48.433 } 00:11:48.433 ] 00:11:48.433 21:26:13 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2571229 00:11:48.433 21:26:13 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:48.433 21:26:13 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:48.433 Running I/O for 10 seconds... 00:11:49.367 Latency(us) 00:11:49.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.367 Nvme0n1 : 1.00 13155.00 51.39 0.00 0.00 0.00 0.00 0.00 00:11:49.367 =================================================================================================================== 00:11:49.367 Total : 13155.00 51.39 0.00 0.00 0.00 0.00 0.00 00:11:49.367 00:11:50.335 21:26:15 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:11:50.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.602 Nvme0n1 : 2.00 13305.50 51.97 0.00 0.00 0.00 0.00 0.00 00:11:50.602 =================================================================================================================== 00:11:50.602 Total : 13305.50 51.97 0.00 0.00 0.00 0.00 0.00 00:11:50.602 00:11:50.602 true 00:11:50.602 21:26:16 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:11:50.602 21:26:16 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:50.863 21:26:16 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:50.863 21:26:16 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:50.863 21:26:16 -- target/nvmf_lvs_grow.sh@65 -- # wait 2571229 00:11:51.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.428 Nvme0n1 : 3.00 13345.00 52.13 0.00 0.00 0.00 0.00 0.00 00:11:51.428 =================================================================================================================== 00:11:51.428 Total : 13345.00 52.13 0.00 0.00 0.00 0.00 0.00 00:11:51.428 00:11:52.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.363 Nvme0n1 : 4.00 13390.75 52.31 0.00 0.00 0.00 0.00 0.00 00:11:52.363 =================================================================================================================== 00:11:52.363 Total : 13390.75 52.31 0.00 0.00 0.00 0.00 0.00 00:11:52.363 00:11:53.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.738 Nvme0n1 : 5.00 13445.40 52.52 0.00 0.00 0.00 0.00 0.00 00:11:53.738 =================================================================================================================== 00:11:53.738 Total : 13445.40 52.52 0.00 0.00 0.00 0.00 0.00 00:11:53.738 00:11:54.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.673 Nvme0n1 : 6.00 13495.17 52.72 0.00 0.00 0.00 0.00 0.00 00:11:54.673 =================================================================================================================== 00:11:54.673 Total : 13495.17 52.72 0.00 0.00 0.00 0.00 0.00 00:11:54.673 00:11:55.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.607 Nvme0n1 : 7.00 13506.71 52.76 0.00 0.00 0.00 0.00 0.00 00:11:55.607 =================================================================================================================== 00:11:55.607 Total : 13506.71 52.76 0.00 0.00 0.00 0.00 0.00 00:11:55.607 00:11:56.542 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:11:56.542 Nvme0n1 : 8.00 13526.38 52.84 0.00 0.00 0.00 0.00 0.00 00:11:56.542 =================================================================================================================== 00:11:56.542 Total : 13526.38 52.84 0.00 0.00 0.00 0.00 0.00 00:11:56.542 00:11:57.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.478 Nvme0n1 : 9.00 13546.11 52.91 0.00 0.00 0.00 0.00 0.00 00:11:57.478 =================================================================================================================== 00:11:57.478 Total : 13546.11 52.91 0.00 0.00 0.00 0.00 0.00 00:11:57.478 00:11:58.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.414 Nvme0n1 : 10.00 13562.70 52.98 0.00 0.00 0.00 0.00 0.00 00:11:58.414 =================================================================================================================== 00:11:58.414 Total : 13562.70 52.98 0.00 0.00 0.00 0.00 0.00 00:11:58.414 00:11:58.414 00:11:58.414 Latency(us) 00:11:58.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.414 Nvme0n1 : 10.01 13562.62 52.98 0.00 0.00 9429.09 3276.80 12815.93 00:11:58.414 =================================================================================================================== 00:11:58.414 Total : 13562.62 52.98 0.00 0.00 9429.09 3276.80 12815.93 00:11:58.414 0 00:11:58.414 21:26:24 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2571094 00:11:58.414 21:26:24 -- common/autotest_common.sh@936 -- # '[' -z 2571094 ']' 00:11:58.414 21:26:24 -- common/autotest_common.sh@940 -- # kill -0 2571094 00:11:58.414 21:26:24 -- common/autotest_common.sh@941 -- # uname 00:11:58.414 21:26:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.414 21:26:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2571094 00:11:58.414 21:26:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:58.414 21:26:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:58.414 21:26:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2571094' 00:11:58.414 killing process with pid 2571094 00:11:58.414 21:26:24 -- common/autotest_common.sh@955 -- # kill 2571094 00:11:58.414 Received shutdown signal, test time was about 10.000000 seconds 00:11:58.414 00:11:58.414 Latency(us) 00:11:58.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.414 =================================================================================================================== 00:11:58.414 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:58.414 21:26:24 -- common/autotest_common.sh@960 -- # wait 2571094 00:11:58.980 21:26:24 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:58.980 21:26:24 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:59.237 21:26:24 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:11:59.237 21:26:24 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:59.495 21:26:25 -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:11:59.495 21:26:25 -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:59.495 21:26:25 -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2568458 00:11:59.495 21:26:25 -- target/nvmf_lvs_grow.sh@75 -- # wait 2568458 00:11:59.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2568458 Killed "${NVMF_APP[@]}" "$@" 00:11:59.753 21:26:25 -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:59.753 21:26:25 -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:59.753 21:26:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:59.753 21:26:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:59.753 21:26:25 -- common/autotest_common.sh@10 -- # set +x 00:11:59.753 21:26:25 -- nvmf/common.sh@470 -- # nvmfpid=2572563 00:11:59.753 21:26:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:59.753 21:26:25 -- nvmf/common.sh@471 -- # waitforlisten 2572563 00:11:59.753 21:26:25 -- common/autotest_common.sh@817 -- # '[' -z 2572563 ']' 00:11:59.753 21:26:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.753 21:26:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:59.753 21:26:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.753 21:26:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:59.753 21:26:25 -- common/autotest_common.sh@10 -- # set +x 00:11:59.753 [2024-04-24 21:26:25.240110] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:11:59.753 [2024-04-24 21:26:25.240188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.753 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.753 [2024-04-24 21:26:25.306558] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.753 [2024-04-24 21:26:25.413812] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.753 [2024-04-24 21:26:25.413875] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.753 [2024-04-24 21:26:25.413890] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.753 [2024-04-24 21:26:25.413901] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.753 [2024-04-24 21:26:25.413926] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
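(The dirty variant differs from the clean one in the single step visible above: after growing the lvstore it kills the first nvmf_tgt with SIGKILL, so the blobstore is left unclean and the recovery path must run when the replacement target reloads it. The verification that follows amounts to the sketch below; the UUID is from this run, and the expected counts follow from the 200M->400M grow with a 150M lvol, i.e. 38 of 99 4MiB clusters in use.)

  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096  # re-attach; triggers blobstore recovery
  scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac \
      | jq -r '.[0].free_clusters'          # expect 61 (99 - 38)
  scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac \
      | jq -r '.[0].total_data_clusters'    # expect 99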
00:11:59.753 [2024-04-24 21:26:25.413956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.011 21:26:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:00.011 21:26:25 -- common/autotest_common.sh@850 -- # return 0 00:12:00.011 21:26:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:00.011 21:26:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:00.011 21:26:25 -- common/autotest_common.sh@10 -- # set +x 00:12:00.011 21:26:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.011 21:26:25 -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:00.269 [2024-04-24 21:26:25.781253] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:00.269 [2024-04-24 21:26:25.781397] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:00.269 [2024-04-24 21:26:25.781461] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:00.269 21:26:25 -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:00.269 21:26:25 -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8d348f04-1b06-4d04-9b43-45c37102f65a 00:12:00.269 21:26:25 -- common/autotest_common.sh@885 -- # local bdev_name=8d348f04-1b06-4d04-9b43-45c37102f65a 00:12:00.269 21:26:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:00.269 21:26:25 -- common/autotest_common.sh@887 -- # local i 00:12:00.269 21:26:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:00.269 21:26:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:00.269 21:26:25 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:00.527 21:26:26 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8d348f04-1b06-4d04-9b43-45c37102f65a -t 2000 00:12:00.785 [ 00:12:00.785 { 00:12:00.785 "name": "8d348f04-1b06-4d04-9b43-45c37102f65a", 00:12:00.785 "aliases": [ 00:12:00.785 "lvs/lvol" 00:12:00.785 ], 00:12:00.785 "product_name": "Logical Volume", 00:12:00.785 "block_size": 4096, 00:12:00.785 "num_blocks": 38912, 00:12:00.785 "uuid": "8d348f04-1b06-4d04-9b43-45c37102f65a", 00:12:00.785 "assigned_rate_limits": { 00:12:00.785 "rw_ios_per_sec": 0, 00:12:00.785 "rw_mbytes_per_sec": 0, 00:12:00.785 "r_mbytes_per_sec": 0, 00:12:00.785 "w_mbytes_per_sec": 0 00:12:00.785 }, 00:12:00.785 "claimed": false, 00:12:00.785 "zoned": false, 00:12:00.785 "supported_io_types": { 00:12:00.785 "read": true, 00:12:00.785 "write": true, 00:12:00.785 "unmap": true, 00:12:00.785 "write_zeroes": true, 00:12:00.785 "flush": false, 00:12:00.785 "reset": true, 00:12:00.785 "compare": false, 00:12:00.785 "compare_and_write": false, 00:12:00.785 "abort": false, 00:12:00.785 "nvme_admin": false, 00:12:00.785 "nvme_io": false 00:12:00.785 }, 00:12:00.785 "driver_specific": { 00:12:00.785 "lvol": { 00:12:00.785 "lvol_store_uuid": "65ad3e5f-76d2-4bec-859d-dc0833278eac", 00:12:00.785 "base_bdev": "aio_bdev", 00:12:00.785 "thin_provision": false, 00:12:00.785 "snapshot": false, 00:12:00.785 "clone": false, 00:12:00.785 "esnap_clone": false 00:12:00.785 } 00:12:00.785 } 00:12:00.785 } 00:12:00.785 ] 00:12:00.785 21:26:26 -- common/autotest_common.sh@893 -- # return 0 00:12:00.785 21:26:26 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:12:00.785 21:26:26 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:01.043 21:26:26 -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:01.043 21:26:26 -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:12:01.043 21:26:26 -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:01.304 21:26:26 -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:01.304 21:26:26 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:01.564 [2024-04-24 21:26:27.010302] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:01.564 21:26:27 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:12:01.564 21:26:27 -- common/autotest_common.sh@638 -- # local es=0 00:12:01.564 21:26:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:12:01.564 21:26:27 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.564 21:26:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:01.564 21:26:27 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.564 21:26:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:01.564 21:26:27 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.564 21:26:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:01.564 21:26:27 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.564 21:26:27 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:01.564 21:26:27 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:12:01.821 request: 00:12:01.821 { 00:12:01.821 "uuid": "65ad3e5f-76d2-4bec-859d-dc0833278eac", 00:12:01.821 "method": "bdev_lvol_get_lvstores", 00:12:01.821 "req_id": 1 00:12:01.821 } 00:12:01.821 Got JSON-RPC error response 00:12:01.821 response: 00:12:01.821 { 00:12:01.821 "code": -19, 00:12:01.821 "message": "No such device" 00:12:01.821 } 00:12:01.821 21:26:27 -- common/autotest_common.sh@641 -- # es=1 00:12:01.821 21:26:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:01.821 21:26:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:01.821 21:26:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:01.821 21:26:27 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:02.079 aio_bdev 00:12:02.079 21:26:27 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8d348f04-1b06-4d04-9b43-45c37102f65a 00:12:02.079 21:26:27 -- 
common/autotest_common.sh@885 -- # local bdev_name=8d348f04-1b06-4d04-9b43-45c37102f65a 00:12:02.079 21:26:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:02.079 21:26:27 -- common/autotest_common.sh@887 -- # local i 00:12:02.079 21:26:27 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:02.079 21:26:27 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:02.079 21:26:27 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:02.350 21:26:27 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8d348f04-1b06-4d04-9b43-45c37102f65a -t 2000 00:12:02.608 [ 00:12:02.608 { 00:12:02.608 "name": "8d348f04-1b06-4d04-9b43-45c37102f65a", 00:12:02.608 "aliases": [ 00:12:02.608 "lvs/lvol" 00:12:02.608 ], 00:12:02.608 "product_name": "Logical Volume", 00:12:02.608 "block_size": 4096, 00:12:02.608 "num_blocks": 38912, 00:12:02.608 "uuid": "8d348f04-1b06-4d04-9b43-45c37102f65a", 00:12:02.608 "assigned_rate_limits": { 00:12:02.608 "rw_ios_per_sec": 0, 00:12:02.608 "rw_mbytes_per_sec": 0, 00:12:02.608 "r_mbytes_per_sec": 0, 00:12:02.608 "w_mbytes_per_sec": 0 00:12:02.608 }, 00:12:02.608 "claimed": false, 00:12:02.608 "zoned": false, 00:12:02.608 "supported_io_types": { 00:12:02.608 "read": true, 00:12:02.608 "write": true, 00:12:02.608 "unmap": true, 00:12:02.608 "write_zeroes": true, 00:12:02.608 "flush": false, 00:12:02.608 "reset": true, 00:12:02.608 "compare": false, 00:12:02.608 "compare_and_write": false, 00:12:02.608 "abort": false, 00:12:02.608 "nvme_admin": false, 00:12:02.608 "nvme_io": false 00:12:02.608 }, 00:12:02.608 "driver_specific": { 00:12:02.608 "lvol": { 00:12:02.608 "lvol_store_uuid": "65ad3e5f-76d2-4bec-859d-dc0833278eac", 00:12:02.608 "base_bdev": "aio_bdev", 00:12:02.608 "thin_provision": false, 00:12:02.608 "snapshot": false, 00:12:02.608 "clone": false, 00:12:02.608 "esnap_clone": false 00:12:02.608 } 00:12:02.608 } 00:12:02.608 } 00:12:02.608 ] 00:12:02.608 21:26:28 -- common/autotest_common.sh@893 -- # return 0 00:12:02.608 21:26:28 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:12:02.608 21:26:28 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:02.868 21:26:28 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:02.868 21:26:28 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:12:02.868 21:26:28 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:03.126 21:26:28 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:03.126 21:26:28 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8d348f04-1b06-4d04-9b43-45c37102f65a 00:12:03.385 21:26:28 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 65ad3e5f-76d2-4bec-859d-dc0833278eac 00:12:03.646 21:26:29 -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:03.906 21:26:29 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:03.906 00:12:03.906 real 0m19.145s 00:12:03.906 user 
0m48.351s 00:12:03.906 sys 0m5.288s 00:12:03.906 21:26:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:03.906 21:26:29 -- common/autotest_common.sh@10 -- # set +x 00:12:03.906 ************************************ 00:12:03.906 END TEST lvs_grow_dirty 00:12:03.906 ************************************ 00:12:03.906 21:26:29 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:03.906 21:26:29 -- common/autotest_common.sh@794 -- # type=--id 00:12:03.906 21:26:29 -- common/autotest_common.sh@795 -- # id=0 00:12:03.906 21:26:29 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:12:03.906 21:26:29 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:03.906 21:26:29 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:12:03.906 21:26:29 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:12:03.906 21:26:29 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:12:03.906 21:26:29 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:03.906 nvmf_trace.0 00:12:03.906 21:26:29 -- common/autotest_common.sh@809 -- # return 0 00:12:03.906 21:26:29 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:03.906 21:26:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:03.906 21:26:29 -- nvmf/common.sh@117 -- # sync 00:12:03.906 21:26:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:03.906 21:26:29 -- nvmf/common.sh@120 -- # set +e 00:12:03.906 21:26:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:03.906 21:26:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:03.906 rmmod nvme_tcp 00:12:03.906 rmmod nvme_fabrics 00:12:03.906 rmmod nvme_keyring 00:12:03.906 21:26:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:03.906 21:26:29 -- nvmf/common.sh@124 -- # set -e 00:12:03.906 21:26:29 -- nvmf/common.sh@125 -- # return 0 00:12:03.906 21:26:29 -- nvmf/common.sh@478 -- # '[' -n 2572563 ']' 00:12:03.906 21:26:29 -- nvmf/common.sh@479 -- # killprocess 2572563 00:12:03.906 21:26:29 -- common/autotest_common.sh@936 -- # '[' -z 2572563 ']' 00:12:03.906 21:26:29 -- common/autotest_common.sh@940 -- # kill -0 2572563 00:12:03.906 21:26:29 -- common/autotest_common.sh@941 -- # uname 00:12:03.906 21:26:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.906 21:26:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2572563 00:12:03.906 21:26:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:03.906 21:26:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:03.906 21:26:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2572563' 00:12:03.906 killing process with pid 2572563 00:12:03.906 21:26:29 -- common/autotest_common.sh@955 -- # kill 2572563 00:12:03.906 21:26:29 -- common/autotest_common.sh@960 -- # wait 2572563 00:12:04.473 21:26:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:04.473 21:26:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:04.473 21:26:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:04.473 21:26:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.473 21:26:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:04.473 21:26:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.473 21:26:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.473 21:26:29 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:06.375 21:26:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:06.375 00:12:06.375 real 0m43.069s 00:12:06.375 user 1m11.536s 00:12:06.375 sys 0m9.157s 00:12:06.375 21:26:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:06.375 21:26:31 -- common/autotest_common.sh@10 -- # set +x 00:12:06.375 ************************************ 00:12:06.375 END TEST nvmf_lvs_grow 00:12:06.375 ************************************ 00:12:06.375 21:26:31 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:06.375 21:26:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:06.375 21:26:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:06.375 21:26:31 -- common/autotest_common.sh@10 -- # set +x 00:12:06.375 ************************************ 00:12:06.375 START TEST nvmf_bdev_io_wait 00:12:06.375 ************************************ 00:12:06.375 21:26:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:06.633 * Looking for test storage... 00:12:06.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.633 21:26:32 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.633 21:26:32 -- nvmf/common.sh@7 -- # uname -s 00:12:06.633 21:26:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.633 21:26:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.633 21:26:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.633 21:26:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.633 21:26:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.633 21:26:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.633 21:26:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.633 21:26:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.633 21:26:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.633 21:26:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.633 21:26:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.633 21:26:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.633 21:26:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.633 21:26:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.633 21:26:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.633 21:26:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.633 21:26:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.633 21:26:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.633 21:26:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.633 21:26:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.633 21:26:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.633 21:26:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.634 21:26:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.634 21:26:32 -- paths/export.sh@5 -- # export PATH 00:12:06.634 21:26:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.634 21:26:32 -- nvmf/common.sh@47 -- # : 0 00:12:06.634 21:26:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:06.634 21:26:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:06.634 21:26:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.634 21:26:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.634 21:26:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.634 21:26:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:06.634 21:26:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:06.634 21:26:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:06.634 21:26:32 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:06.634 21:26:32 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:06.634 21:26:32 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:06.634 21:26:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:06.634 21:26:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.634 21:26:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:06.634 21:26:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:06.634 21:26:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:06.634 21:26:32 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.634 21:26:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.634 21:26:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.634 21:26:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:06.634 21:26:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:06.634 21:26:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:06.634 21:26:32 -- common/autotest_common.sh@10 -- # set +x 00:12:08.545 21:26:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:08.545 21:26:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:08.545 21:26:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:08.545 21:26:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:08.545 21:26:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:08.545 21:26:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:08.545 21:26:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:08.545 21:26:34 -- nvmf/common.sh@295 -- # net_devs=() 00:12:08.545 21:26:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:08.545 21:26:34 -- nvmf/common.sh@296 -- # e810=() 00:12:08.545 21:26:34 -- nvmf/common.sh@296 -- # local -ga e810 00:12:08.545 21:26:34 -- nvmf/common.sh@297 -- # x722=() 00:12:08.545 21:26:34 -- nvmf/common.sh@297 -- # local -ga x722 00:12:08.545 21:26:34 -- nvmf/common.sh@298 -- # mlx=() 00:12:08.545 21:26:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:08.545 21:26:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.545 21:26:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.545 21:26:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.545 21:26:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.545 21:26:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.545 21:26:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.545 21:26:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.545 21:26:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.545 21:26:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.545 21:26:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.545 21:26:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.545 21:26:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:08.545 21:26:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:08.545 21:26:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:08.545 21:26:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.545 21:26:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:08.545 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:08.545 21:26:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:12:08.545 21:26:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:08.545 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:08.545 21:26:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:08.545 21:26:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.545 21:26:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.545 21:26:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:08.545 21:26:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.545 21:26:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:08.545 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:08.545 21:26:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.545 21:26:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.545 21:26:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.545 21:26:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:08.545 21:26:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.545 21:26:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:08.545 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:08.545 21:26:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.545 21:26:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:08.545 21:26:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:08.545 21:26:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:08.545 21:26:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:08.545 21:26:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.545 21:26:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.545 21:26:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.545 21:26:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:08.545 21:26:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.545 21:26:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.545 21:26:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:08.545 21:26:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.545 21:26:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.546 21:26:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:08.546 21:26:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:08.546 21:26:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.546 21:26:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.546 21:26:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.546 21:26:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.546 21:26:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.546 21:26:34 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.805 21:26:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.805 21:26:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.805 21:26:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:12:08.805 00:12:08.805 --- 10.0.0.2 ping statistics --- 00:12:08.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.805 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:12:08.805 21:26:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:12:08.805 00:12:08.805 --- 10.0.0.1 ping statistics --- 00:12:08.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.805 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:12:08.805 21:26:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.805 21:26:34 -- nvmf/common.sh@411 -- # return 0 00:12:08.805 21:26:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:08.805 21:26:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.805 21:26:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:08.805 21:26:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:08.805 21:26:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.805 21:26:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:08.805 21:26:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:08.805 21:26:34 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:08.805 21:26:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:08.805 21:26:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:08.805 21:26:34 -- common/autotest_common.sh@10 -- # set +x 00:12:08.805 21:26:34 -- nvmf/common.sh@470 -- # nvmfpid=2575097 00:12:08.805 21:26:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:08.805 21:26:34 -- nvmf/common.sh@471 -- # waitforlisten 2575097 00:12:08.805 21:26:34 -- common/autotest_common.sh@817 -- # '[' -z 2575097 ']' 00:12:08.805 21:26:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.805 21:26:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:08.805 21:26:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.805 21:26:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:08.805 21:26:34 -- common/autotest_common.sh@10 -- # set +x 00:12:08.805 [2024-04-24 21:26:34.364298] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
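The sequence just traced is the phy TCP topology in miniature: one E810 port (cvl_0_0) moves into a private network namespace where nvmf_tgt runs, the peer port (cvl_0_1) stays in the root namespace for the initiator, and both sit on 10.0.0.0/24 with a firewall opening for the NVMe/TCP port. A condensed sketch of that wiring, using the interface and namespace names from this run (every command appears in the trace; only the comments are added):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one NIC port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # root namespace -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator reachability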
00:12:08.805 [2024-04-24 21:26:34.364389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.805 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.805 [2024-04-24 21:26:34.436880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.063 [2024-04-24 21:26:34.558383] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.063 [2024-04-24 21:26:34.558441] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.063 [2024-04-24 21:26:34.558458] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.063 [2024-04-24 21:26:34.558472] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.063 [2024-04-24 21:26:34.558484] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.063 [2024-04-24 21:26:34.558583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.063 [2024-04-24 21:26:34.558643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.063 [2024-04-24 21:26:34.558680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.063 [2024-04-24 21:26:34.558686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.629 21:26:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:09.629 21:26:35 -- common/autotest_common.sh@850 -- # return 0 00:12:09.629 21:26:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:09.629 21:26:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:09.629 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:09.888 21:26:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:09.888 21:26:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.888 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:09.888 21:26:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:09.888 21:26:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.888 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:09.888 21:26:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.888 21:26:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.888 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:09.888 [2024-04-24 21:26:35.401725] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.888 21:26:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:09.888 21:26:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.888 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:09.888 Malloc0 00:12:09.888 21:26:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.888 21:26:35 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.888 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:09.888 21:26:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.888 21:26:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.888 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:09.888 21:26:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.888 21:26:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.888 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:09.888 [2024-04-24 21:26:35.465376] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.888 21:26:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2575248 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@30 -- # READ_PID=2575249 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:09.888 21:26:35 -- nvmf/common.sh@521 -- # config=() 00:12:09.888 21:26:35 -- nvmf/common.sh@521 -- # local subsystem config 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2575252 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:09.888 21:26:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:09.888 21:26:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:09.888 { 00:12:09.888 "params": { 00:12:09.888 "name": "Nvme$subsystem", 00:12:09.888 "trtype": "$TEST_TRANSPORT", 00:12:09.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.888 "adrfam": "ipv4", 00:12:09.888 "trsvcid": "$NVMF_PORT", 00:12:09.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:09.888 "hdgst": ${hdgst:-false}, 00:12:09.888 "ddgst": ${ddgst:-false} 00:12:09.888 }, 00:12:09.888 "method": "bdev_nvme_attach_controller" 00:12:09.888 } 00:12:09.888 EOF 00:12:09.888 )") 00:12:09.888 21:26:35 -- nvmf/common.sh@521 -- # config=() 00:12:09.888 21:26:35 -- nvmf/common.sh@521 -- # local subsystem config 00:12:09.888 21:26:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:09.888 21:26:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:09.888 { 00:12:09.888 "params": { 00:12:09.888 "name": "Nvme$subsystem", 00:12:09.888 "trtype": "$TEST_TRANSPORT", 00:12:09.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.888 "adrfam": "ipv4", 00:12:09.888 "trsvcid": "$NVMF_PORT", 00:12:09.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:09.888 "hdgst": ${hdgst:-false}, 00:12:09.888 "ddgst": ${ddgst:-false} 00:12:09.888 }, 00:12:09.888 "method": "bdev_nvme_attach_controller" 00:12:09.888 } 00:12:09.888 EOF 00:12:09.888 )") 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2575254 00:12:09.888 
21:26:35 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@35 -- # sync 00:12:09.888 21:26:35 -- nvmf/common.sh@521 -- # config=() 00:12:09.888 21:26:35 -- nvmf/common.sh@521 -- # local subsystem config 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:09.888 21:26:35 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:09.888 21:26:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:09.888 21:26:35 -- nvmf/common.sh@543 -- # cat 00:12:09.888 21:26:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:09.888 { 00:12:09.888 "params": { 00:12:09.888 "name": "Nvme$subsystem", 00:12:09.888 "trtype": "$TEST_TRANSPORT", 00:12:09.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.888 "adrfam": "ipv4", 00:12:09.888 "trsvcid": "$NVMF_PORT", 00:12:09.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:09.888 "hdgst": ${hdgst:-false}, 00:12:09.888 "ddgst": ${ddgst:-false} 00:12:09.888 }, 00:12:09.888 "method": "bdev_nvme_attach_controller" 00:12:09.888 } 00:12:09.888 EOF 00:12:09.888 )") 00:12:09.889 21:26:35 -- nvmf/common.sh@521 -- # config=() 00:12:09.889 21:26:35 -- nvmf/common.sh@521 -- # local subsystem config 00:12:09.889 21:26:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:09.889 21:26:35 -- nvmf/common.sh@543 -- # cat 00:12:09.889 21:26:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:09.889 { 00:12:09.889 "params": { 00:12:09.889 "name": "Nvme$subsystem", 00:12:09.889 "trtype": "$TEST_TRANSPORT", 00:12:09.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.889 "adrfam": "ipv4", 00:12:09.889 "trsvcid": "$NVMF_PORT", 00:12:09.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:09.889 "hdgst": ${hdgst:-false}, 00:12:09.889 "ddgst": ${ddgst:-false} 00:12:09.889 }, 00:12:09.889 "method": "bdev_nvme_attach_controller" 00:12:09.889 } 00:12:09.889 EOF 00:12:09.889 )") 00:12:09.889 21:26:35 -- nvmf/common.sh@543 -- # cat 00:12:09.889 21:26:35 -- target/bdev_io_wait.sh@37 -- # wait 2575248 00:12:09.889 21:26:35 -- nvmf/common.sh@543 -- # cat 00:12:09.889 21:26:35 -- nvmf/common.sh@545 -- # jq . 00:12:09.889 21:26:35 -- nvmf/common.sh@545 -- # jq . 00:12:09.889 21:26:35 -- nvmf/common.sh@545 -- # jq . 00:12:09.889 21:26:35 -- nvmf/common.sh@546 -- # IFS=, 00:12:09.889 21:26:35 -- nvmf/common.sh@545 -- # jq . 
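The four bdevperf invocations above drive one workload each (write, read, flush, unmap) against the same cnode1 subsystem. Each instance gets its own core via -m and its own DPDK shared-memory id via -i, which is why four independent EAL instances (file prefixes spdk1 through spdk4) initialize below. A condensed sketch of the launch pattern, with the long build path shortened to bdevperf and the target JSON assumed to come from the gen_nvmf_target_json helper in the sourced nvmf/common.sh (the trace feeds it in on /dev/fd/63, i.e. process substitution):

  # one bdevperf per workload; -m pins the core, -i separates the DPDK shm instances
  bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait "$WRITE_PID"   # the script then waits on each pid in turn, as the wait 2575248 above shows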
00:12:09.889 21:26:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:09.889 "params": { 00:12:09.889 "name": "Nvme1", 00:12:09.889 "trtype": "tcp", 00:12:09.889 "traddr": "10.0.0.2", 00:12:09.889 "adrfam": "ipv4", 00:12:09.889 "trsvcid": "4420", 00:12:09.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.889 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.889 "hdgst": false, 00:12:09.889 "ddgst": false 00:12:09.889 }, 00:12:09.889 "method": "bdev_nvme_attach_controller" 00:12:09.889 }' 00:12:09.889 21:26:35 -- nvmf/common.sh@546 -- # IFS=, 00:12:09.889 21:26:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:09.889 "params": { 00:12:09.889 "name": "Nvme1", 00:12:09.889 "trtype": "tcp", 00:12:09.889 "traddr": "10.0.0.2", 00:12:09.889 "adrfam": "ipv4", 00:12:09.889 "trsvcid": "4420", 00:12:09.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.889 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.889 "hdgst": false, 00:12:09.889 "ddgst": false 00:12:09.889 }, 00:12:09.889 "method": "bdev_nvme_attach_controller" 00:12:09.889 }' 00:12:09.889 21:26:35 -- nvmf/common.sh@546 -- # IFS=, 00:12:09.889 21:26:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:09.889 "params": { 00:12:09.889 "name": "Nvme1", 00:12:09.889 "trtype": "tcp", 00:12:09.889 "traddr": "10.0.0.2", 00:12:09.889 "adrfam": "ipv4", 00:12:09.889 "trsvcid": "4420", 00:12:09.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.889 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.889 "hdgst": false, 00:12:09.889 "ddgst": false 00:12:09.889 }, 00:12:09.889 "method": "bdev_nvme_attach_controller" 00:12:09.889 }' 00:12:09.889 21:26:35 -- nvmf/common.sh@546 -- # IFS=, 00:12:09.889 21:26:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:09.889 "params": { 00:12:09.889 "name": "Nvme1", 00:12:09.889 "trtype": "tcp", 00:12:09.889 "traddr": "10.0.0.2", 00:12:09.889 "adrfam": "ipv4", 00:12:09.889 "trsvcid": "4420", 00:12:09.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.889 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.889 "hdgst": false, 00:12:09.889 "ddgst": false 00:12:09.889 }, 00:12:09.889 "method": "bdev_nvme_attach_controller" 00:12:09.889 }' 00:12:09.889 [2024-04-24 21:26:35.512844] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:12:09.889 [2024-04-24 21:26:35.512917] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:09.889 [2024-04-24 21:26:35.513009] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:12:09.889 [2024-04-24 21:26:35.513008] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:12:09.889 [2024-04-24 21:26:35.513009] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
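Each printf block above is the controller stanza the JSON generator emits once per bdevperf instance. Reflowed here for readability (the content is identical to the trace; that this stanza gets wrapped in the standard SPDK "subsystems" envelope before bdevperf parses it is an assumption about gen_nvmf_target_json, not something the log shows):

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }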
00:12:09.889 [2024-04-24 21:26:35.513090] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:12:09.889 [2024-04-24 21:26:35.513091] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:12:09.889 [2024-04-24 21:26:35.513091] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:12:10.147 EAL: No free 2048 kB hugepages reported on node 1
00:12:10.147 EAL: No free 2048 kB hugepages reported on node 1
00:12:10.147 [2024-04-24 21:26:35.686494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:10.147 EAL: No free 2048 kB hugepages reported on node 1
00:12:10.147 [2024-04-24 21:26:35.784791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:12:10.147 [2024-04-24 21:26:35.793816] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:10.405 EAL: No free 2048 kB hugepages reported on node 1
00:12:10.405 [2024-04-24 21:26:35.889145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:12:10.405 [2024-04-24 21:26:35.893963] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:10.405 [2024-04-24 21:26:35.959378] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:10.405 [2024-04-24 21:26:35.985386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:12:10.405 [2024-04-24 21:26:36.047747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:12:10.663 Running I/O for 1 seconds...
00:12:10.663 Running I/O for 1 seconds...
00:12:10.663 Running I/O for 1 seconds...
00:12:10.663 Running I/O for 1 seconds...
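In the per-workload tables that follow, MiB/s is simply IOPS scaled by the 4096-byte I/O size: MiB/s = IOPS * 4096 / 2^20, i.e. IOPS / 256. A quick sanity check against the write row:

  awk 'BEGIN { printf "%.2f\n", 9159.29 * 4096 / 1048576 }'   # prints 35.78, matching the Nvme1n1 write row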
00:12:11.596 00:12:11.596 Latency(us) 00:12:11.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.596 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:11.596 Nvme1n1 : 1.00 196118.69 766.09 0.00 0.00 650.19 254.86 904.15 00:12:11.596 =================================================================================================================== 00:12:11.596 Total : 196118.69 766.09 0.00 0.00 650.19 254.86 904.15 00:12:11.596 00:12:11.596 Latency(us) 00:12:11.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.596 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:11.596 Nvme1n1 : 1.01 9159.29 35.78 0.00 0.00 13895.10 8301.23 26408.58 00:12:11.596 =================================================================================================================== 00:12:11.596 Total : 9159.29 35.78 0.00 0.00 13895.10 8301.23 26408.58 00:12:11.596 00:12:11.596 Latency(us) 00:12:11.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.596 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:11.596 Nvme1n1 : 1.01 7388.70 28.86 0.00 0.00 17218.75 8786.68 28156.21 00:12:11.596 =================================================================================================================== 00:12:11.596 Total : 7388.70 28.86 0.00 0.00 17218.75 8786.68 28156.21 00:12:11.596 00:12:11.597 Latency(us) 00:12:11.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.597 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:11.597 Nvme1n1 : 1.01 8627.37 33.70 0.00 0.00 14755.17 9806.13 27573.67 00:12:11.597 =================================================================================================================== 00:12:11.597 Total : 8627.37 33.70 0.00 0.00 14755.17 9806.13 27573.67 00:12:12.162 21:26:37 -- target/bdev_io_wait.sh@38 -- # wait 2575249 00:12:12.162 21:26:37 -- target/bdev_io_wait.sh@39 -- # wait 2575252 00:12:12.162 21:26:37 -- target/bdev_io_wait.sh@40 -- # wait 2575254 00:12:12.162 21:26:37 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.162 21:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.162 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:12:12.162 21:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.162 21:26:37 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:12.162 21:26:37 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:12.162 21:26:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:12.162 21:26:37 -- nvmf/common.sh@117 -- # sync 00:12:12.162 21:26:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:12.162 21:26:37 -- nvmf/common.sh@120 -- # set +e 00:12:12.162 21:26:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:12.162 21:26:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:12.162 rmmod nvme_tcp 00:12:12.162 rmmod nvme_fabrics 00:12:12.162 rmmod nvme_keyring 00:12:12.162 21:26:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:12.162 21:26:37 -- nvmf/common.sh@124 -- # set -e 00:12:12.162 21:26:37 -- nvmf/common.sh@125 -- # return 0 00:12:12.162 21:26:37 -- nvmf/common.sh@478 -- # '[' -n 2575097 ']' 00:12:12.162 21:26:37 -- nvmf/common.sh@479 -- # killprocess 2575097 00:12:12.162 21:26:37 -- common/autotest_common.sh@936 -- # '[' -z 2575097 ']' 00:12:12.162 21:26:37 -- 
common/autotest_common.sh@940 -- # kill -0 2575097 00:12:12.162 21:26:37 -- common/autotest_common.sh@941 -- # uname 00:12:12.162 21:26:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:12.162 21:26:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2575097 00:12:12.162 21:26:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:12.162 21:26:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:12.162 21:26:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2575097' 00:12:12.162 killing process with pid 2575097 00:12:12.163 21:26:37 -- common/autotest_common.sh@955 -- # kill 2575097 00:12:12.163 21:26:37 -- common/autotest_common.sh@960 -- # wait 2575097 00:12:12.421 21:26:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:12.421 21:26:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:12.421 21:26:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:12.421 21:26:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:12.421 21:26:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:12.421 21:26:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.421 21:26:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.421 21:26:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.954 21:26:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:14.954 00:12:14.954 real 0m8.040s 00:12:14.954 user 0m19.557s 00:12:14.954 sys 0m3.693s 00:12:14.954 21:26:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:14.954 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:12:14.954 ************************************ 00:12:14.954 END TEST nvmf_bdev_io_wait 00:12:14.954 ************************************ 00:12:14.954 21:26:40 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:14.954 21:26:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:14.954 21:26:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.954 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:12:14.954 ************************************ 00:12:14.954 START TEST nvmf_queue_depth 00:12:14.954 ************************************ 00:12:14.954 21:26:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:14.954 * Looking for test storage... 
00:12:14.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.954 21:26:40 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.954 21:26:40 -- nvmf/common.sh@7 -- # uname -s 00:12:14.954 21:26:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.954 21:26:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.954 21:26:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.954 21:26:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.954 21:26:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.954 21:26:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.954 21:26:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.954 21:26:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.954 21:26:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.954 21:26:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.954 21:26:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:14.954 21:26:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:14.954 21:26:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.954 21:26:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.954 21:26:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.954 21:26:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.954 21:26:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.954 21:26:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.954 21:26:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.954 21:26:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.954 21:26:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.954 21:26:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.954 21:26:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.954 21:26:40 -- paths/export.sh@5 -- # export PATH 00:12:14.954 21:26:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.954 21:26:40 -- nvmf/common.sh@47 -- # : 0 00:12:14.954 21:26:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.954 21:26:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.954 21:26:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.954 21:26:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.954 21:26:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.954 21:26:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.954 21:26:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.954 21:26:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.954 21:26:40 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:14.954 21:26:40 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:14.954 21:26:40 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:14.954 21:26:40 -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:14.954 21:26:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:14.954 21:26:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.954 21:26:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:14.954 21:26:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:14.954 21:26:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:14.954 21:26:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.954 21:26:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.954 21:26:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.954 21:26:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:14.954 21:26:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:14.954 21:26:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.954 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:12:16.858 21:26:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:16.858 21:26:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:16.858 21:26:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:16.858 21:26:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:16.858 21:26:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:16.858 21:26:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:16.858 21:26:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:16.858 21:26:42 -- nvmf/common.sh@295 -- # net_devs=() 
00:12:16.858 21:26:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:16.858 21:26:42 -- nvmf/common.sh@296 -- # e810=() 00:12:16.858 21:26:42 -- nvmf/common.sh@296 -- # local -ga e810 00:12:16.858 21:26:42 -- nvmf/common.sh@297 -- # x722=() 00:12:16.858 21:26:42 -- nvmf/common.sh@297 -- # local -ga x722 00:12:16.858 21:26:42 -- nvmf/common.sh@298 -- # mlx=() 00:12:16.858 21:26:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:16.858 21:26:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.858 21:26:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.858 21:26:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.858 21:26:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.858 21:26:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.858 21:26:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.858 21:26:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.858 21:26:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.858 21:26:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.858 21:26:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.858 21:26:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.858 21:26:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:16.858 21:26:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:16.858 21:26:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:16.858 21:26:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.858 21:26:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:16.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:16.858 21:26:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.858 21:26:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:16.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:16.858 21:26:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:16.858 21:26:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.858 21:26:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.858 21:26:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:16.858 21:26:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:12:16.858 21:26:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:16.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:16.858 21:26:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.858 21:26:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.858 21:26:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.858 21:26:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:16.858 21:26:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.858 21:26:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:16.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:16.858 21:26:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.858 21:26:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:16.858 21:26:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:16.858 21:26:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:16.858 21:26:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.858 21:26:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.858 21:26:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.858 21:26:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:16.858 21:26:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.858 21:26:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.858 21:26:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:16.858 21:26:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.858 21:26:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.858 21:26:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:16.858 21:26:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:16.858 21:26:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.858 21:26:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.858 21:26:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.858 21:26:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.858 21:26:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:16.858 21:26:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.858 21:26:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.858 21:26:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.858 21:26:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:16.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:12:16.858 00:12:16.858 --- 10.0.0.2 ping statistics --- 00:12:16.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.858 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:12:16.858 21:26:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:12:16.858 00:12:16.858 --- 10.0.0.1 ping statistics --- 00:12:16.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.858 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:12:16.858 21:26:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.858 21:26:42 -- nvmf/common.sh@411 -- # return 0 00:12:16.858 21:26:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:16.858 21:26:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.858 21:26:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:16.858 21:26:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.858 21:26:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:16.858 21:26:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:16.858 21:26:42 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:16.858 21:26:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:16.859 21:26:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:16.859 21:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:16.859 21:26:42 -- nvmf/common.sh@470 -- # nvmfpid=2577484 00:12:16.859 21:26:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:16.859 21:26:42 -- nvmf/common.sh@471 -- # waitforlisten 2577484 00:12:16.859 21:26:42 -- common/autotest_common.sh@817 -- # '[' -z 2577484 ']' 00:12:16.859 21:26:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.859 21:26:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:16.859 21:26:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.859 21:26:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:16.859 21:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:16.859 [2024-04-24 21:26:42.481663] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:12:16.859 [2024-04-24 21:26:42.481753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.859 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.118 [2024-04-24 21:26:42.553401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.118 [2024-04-24 21:26:42.668752] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.118 [2024-04-24 21:26:42.668817] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.118 [2024-04-24 21:26:42.668842] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.118 [2024-04-24 21:26:42.668856] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.118 [2024-04-24 21:26:42.668868] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:17.118 [2024-04-24 21:26:42.668902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.054 21:26:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:18.054 21:26:43 -- common/autotest_common.sh@850 -- # return 0 00:12:18.054 21:26:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:18.054 21:26:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:18.054 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:12:18.054 21:26:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.054 21:26:43 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.054 21:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.054 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:12:18.054 [2024-04-24 21:26:43.503773] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.054 21:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.054 21:26:43 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:18.054 21:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.054 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:12:18.054 Malloc0 00:12:18.054 21:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.054 21:26:43 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:18.054 21:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.054 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:12:18.054 21:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.054 21:26:43 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:18.054 21:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.054 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:12:18.054 21:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.054 21:26:43 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.054 21:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.054 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:12:18.054 [2024-04-24 21:26:43.569623] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.054 21:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.054 21:26:43 -- target/queue_depth.sh@30 -- # bdevperf_pid=2577636 00:12:18.054 21:26:43 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:18.054 21:26:43 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:18.054 21:26:43 -- target/queue_depth.sh@33 -- # waitforlisten 2577636 /var/tmp/bdevperf.sock 00:12:18.054 21:26:43 -- common/autotest_common.sh@817 -- # '[' -z 2577636 ']' 00:12:18.054 21:26:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:18.054 21:26:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:18.054 21:26:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
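Unlike bdev_io_wait, the queue_depth test runs bdevperf held: -z makes it initialize and then wait on its own RPC socket, the NVMe-oF controller is attached over that socket, and the 10-second verify run at queue depth 1024 is triggered externally. A condensed sketch of the control flow traced here (rpc_cmd is the test helper around scripts/rpc.py; bdevperf.py is the driver under examples/bdev/bdevperf/ in the tree; the long build path is shortened to bdevperf):

  # target side (inside the namespace): transport, backing malloc bdev, subsystem, namespace, listener
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: start bdevperf held, attach the controller over its RPC socket, then run
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests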
00:12:18.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:12:18.054 21:26:43 -- common/autotest_common.sh@826 -- # xtrace_disable
00:12:18.054 21:26:43 -- common/autotest_common.sh@10 -- # set +x
00:12:18.054 [2024-04-24 21:26:43.615598] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:12:18.054 [2024-04-24 21:26:43.615704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2577636 ]
00:12:18.054 EAL: No free 2048 kB hugepages reported on node 1
00:12:18.054 [2024-04-24 21:26:43.677690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:18.321 [2024-04-24 21:26:43.792960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:18.321 21:26:43 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:12:18.321 21:26:43 -- common/autotest_common.sh@850 -- # return 0
00:12:18.321 21:26:43 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:12:18.321 21:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:18.321 21:26:43 -- common/autotest_common.sh@10 -- # set +x
00:12:18.579 NVMe0n1
00:12:18.579 21:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:18.579 21:26:44 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:12:18.579 Running I/O for 10 seconds...
00:12:30.780
00:12:30.780 Latency(us)
00:12:30.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:30.780 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:12:30.780 Verification LBA range: start 0x0 length 0x4000
00:12:30.780 NVMe0n1 : 10.09 8377.54 32.72 0.00 0.00 121591.44 24369.68 76118.85
00:12:30.780 ===================================================================================================================
00:12:30.780 Total : 8377.54 32.72 0.00 0.00 121591.44 24369.68 76118.85
00:12:30.780 0
00:12:30.780 21:26:54 -- target/queue_depth.sh@39 -- # killprocess 2577636
00:12:30.780 21:26:54 -- common/autotest_common.sh@936 -- # '[' -z 2577636 ']'
00:12:30.780 21:26:54 -- common/autotest_common.sh@940 -- # kill -0 2577636
00:12:30.780 21:26:54 -- common/autotest_common.sh@941 -- # uname
00:12:30.780 21:26:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:30.780 21:26:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2577636
00:12:30.780 21:26:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:12:30.780 21:26:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:12:30.780 21:26:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2577636'
00:12:30.780 killing process with pid 2577636
00:12:30.780 21:26:54 -- common/autotest_common.sh@955 -- # kill 2577636
00:12:30.780 Received shutdown signal, test time was about 10.000000 seconds
00:12:30.780
00:12:30.780 Latency(us)
00:12:30.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:30.780 ===================================================================================================================
00:12:30.780 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:12:30.780 21:26:54 -- common/autotest_common.sh@960 -- # wait 2577636
00:12:30.780 21:26:54 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:12:30.780 21:26:54 -- target/queue_depth.sh@43 -- # nvmftestfini
00:12:30.780 21:26:54 -- nvmf/common.sh@477 -- # nvmfcleanup
00:12:30.780 21:26:54 -- nvmf/common.sh@117 -- # sync
00:12:30.780 21:26:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:30.780 21:26:54 -- nvmf/common.sh@120 -- # set +e
00:12:30.780 21:26:54 -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:30.780 21:26:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:30.780 rmmod nvme_tcp
00:12:30.780 rmmod nvme_fabrics
00:12:30.780 rmmod nvme_keyring
00:12:30.780 21:26:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:30.780 21:26:54 -- nvmf/common.sh@124 -- # set -e
00:12:30.780 21:26:54 -- nvmf/common.sh@125 -- # return 0
00:12:30.780 21:26:54 -- nvmf/common.sh@478 -- # '[' -n 2577484 ']'
00:12:30.780 21:26:54 -- nvmf/common.sh@479 -- # killprocess 2577484
00:12:30.780 21:26:54 -- common/autotest_common.sh@936 -- # '[' -z 2577484 ']'
00:12:30.780 21:26:54 -- common/autotest_common.sh@940 -- # kill -0 2577484
00:12:30.780 21:26:54 -- common/autotest_common.sh@941 -- # uname
00:12:30.780 21:26:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:30.780 21:26:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2577484
00:12:30.780 21:26:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:12:30.780 21:26:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:12:30.780 21:26:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2577484'
00:12:30.780 killing process with pid 2577484
00:12:30.780 21:26:54 -- common/autotest_common.sh@955 -- # kill 2577484
00:12:30.780 21:26:54 -- common/autotest_common.sh@960 -- # wait 2577484
00:12:30.780 21:26:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:12:30.780 21:26:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:12:30.780 21:26:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:12:30.780 21:26:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:30.780 21:26:55 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:30.780 21:26:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:30.780 21:26:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:30.780 21:26:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:31.717 21:26:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:31.717
00:12:31.717 real 0m16.900s
00:12:31.717 user 0m23.819s
00:12:31.717 sys 0m3.083s
00:12:31.717 21:26:57 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:12:31.717 21:26:57 -- common/autotest_common.sh@10 -- # set +x
00:12:31.717 ************************************
00:12:31.717 END TEST nvmf_queue_depth
00:12:31.717 ************************************
00:12:31.717 21:26:57 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:12:31.717 21:26:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:12:31.717 21:26:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:31.717 21:26:57 -- common/autotest_common.sh@10 -- # set +x
00:12:31.717 ************************************
00:12:31.717 START TEST nvmf_multipath
00:12:31.717 ************************************
00:12:31.717 21:26:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:12:31.717 * Looking for test storage...
00:12:31.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:31.717 21:26:57 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:31.717 21:26:57 -- nvmf/common.sh@7 -- # uname -s
00:12:31.717 21:26:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:31.717 21:26:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:31.717 21:26:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:31.717 21:26:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:31.717 21:26:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:31.717 21:26:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:31.717 21:26:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:31.717 21:26:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:31.717 21:26:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:31.717 21:26:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:31.717 21:26:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:12:31.717 21:26:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:12:31.717 21:26:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:31.717 21:26:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:31.717 21:26:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:31.717 21:26:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:31.717 21:26:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:31.717 21:26:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:31.717 21:26:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:31.717 21:26:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:31.718 21:26:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:31.718 21:26:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:31.718 21:26:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:31.718 21:26:57 -- paths/export.sh@5 -- # export PATH
00:12:31.718 21:26:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:31.718 21:26:57 -- nvmf/common.sh@47 -- # : 0
00:12:31.718 21:26:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:31.718 21:26:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:31.718 21:26:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:31.718 21:26:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:31.718 21:26:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:31.718 21:26:57 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:31.718 21:26:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:31.718 21:26:57 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:31.718 21:26:57 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:12:31.718 21:26:57 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:12:31.718 21:26:57 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:12:31.718 21:26:57 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:31.718 21:26:57 -- target/multipath.sh@43 -- # nvmftestinit
00:12:31.718 21:26:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:12:31.718 21:26:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:31.718 21:26:57 -- nvmf/common.sh@437 -- # prepare_net_devs
00:12:31.718 21:26:57 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:12:31.718 21:26:57 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:12:31.718 21:26:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:31.718 21:26:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:31.718 21:26:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:31.718 21:26:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:12:31.718 21:26:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:12:31.718 21:26:57 -- nvmf/common.sh@285 -- # xtrace_disable
00:12:31.718 21:26:57 -- common/autotest_common.sh@10 -- # set +x
00:12:33.725 21:26:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:12:33.725 21:26:59 -- nvmf/common.sh@291 -- # pci_devs=()
00:12:33.725 21:26:59 -- nvmf/common.sh@291 -- # local -a pci_devs
00:12:33.725 21:26:59 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:12:33.725 21:26:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:12:33.725 21:26:59 -- nvmf/common.sh@293 -- # pci_drivers=()
00:12:33.725 21:26:59 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:12:33.725 21:26:59 -- nvmf/common.sh@295 -- # net_devs=()
00:12:33.725 21:26:59 -- nvmf/common.sh@295 -- # local -ga net_devs
00:12:33.725 21:26:59 -- nvmf/common.sh@296 -- # e810=()
00:12:33.725 21:26:59 -- nvmf/common.sh@296 -- # local -ga e810
00:12:33.725 21:26:59 -- nvmf/common.sh@297 -- # x722=()
00:12:33.725 21:26:59 -- nvmf/common.sh@297 -- # local -ga x722
00:12:33.725 21:26:59 -- nvmf/common.sh@298 -- # mlx=()
00:12:33.725 21:26:59 -- nvmf/common.sh@298 -- # local -ga mlx
00:12:33.725 21:26:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:33.725 21:26:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:33.725 21:26:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:33.725 21:26:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:33.725 21:26:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:33.725 21:26:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:33.725 21:26:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:33.725 21:26:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:33.725 21:26:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:33.725 21:26:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:33.725 21:26:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:33.725 21:26:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:12:33.725 21:26:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:12:33.725 21:26:59 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:12:33.725 21:26:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:33.725 21:26:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:12:33.725 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:12:33.725 21:26:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:33.725 21:26:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:12:33.725 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:12:33.725 21:26:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:12:33.725 21:26:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:12:33.725 21:26:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:33.725 21:26:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:33.725 21:26:59 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:12:33.725 21:26:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:33.725 21:26:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:12:33.725 Found net devices under 0000:0a:00.0: cvl_0_0
00:12:33.725 21:26:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:12:33.725 21:26:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:33.725 21:26:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:33.726 21:26:59 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:12:33.726 21:26:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:33.726 21:26:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:12:33.726 Found net devices under 0000:0a:00.1: cvl_0_1
00:12:33.726 21:26:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:12:33.726 21:26:59 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:12:33.726 21:26:59 -- nvmf/common.sh@403 -- # is_hw=yes
00:12:33.726 21:26:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:12:33.726 21:26:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:12:33.726 21:26:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:12:33.726 21:26:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:33.726 21:26:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:33.726 21:26:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:33.726 21:26:59 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:12:33.726 21:26:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:33.726 21:26:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:33.726 21:26:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:33.726 21:26:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:33.726 21:26:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:33.726 21:26:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:33.726 21:26:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:33.726 21:26:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:33.726 21:26:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:33.726 21:26:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:33.726 21:26:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:33.726 21:26:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:33.726 21:26:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:33.726 21:26:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:33.726 21:26:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:33.726 21:26:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:33.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:33.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms
00:12:33.726
00:12:33.726 --- 10.0.0.2 ping statistics ---
00:12:33.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:33.726 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:12:33.726 21:26:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:33.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:33.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms
00:12:33.726
00:12:33.726 --- 10.0.0.1 ping statistics ---
00:12:33.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:33.726 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:12:33.726 21:26:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:33.726 21:26:59 -- nvmf/common.sh@411 -- # return 0
00:12:33.726 21:26:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:12:33.726 21:26:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:33.726 21:26:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:12:33.726 21:26:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:12:33.726 21:26:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:33.726 21:26:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:12:33.726 21:26:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:12:33.726 21:26:59 -- target/multipath.sh@45 -- # '[' -z ']'
00:12:33.726 21:26:59 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:12:33.726 only one NIC for nvmf test
00:12:33.726 21:26:59 -- target/multipath.sh@47 -- # nvmftestfini
00:12:33.726 21:26:59 -- nvmf/common.sh@477 -- # nvmfcleanup
00:12:33.726 21:26:59 -- nvmf/common.sh@117 -- # sync
00:12:33.726 21:26:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:33.726 21:26:59 -- nvmf/common.sh@120 -- # set +e
00:12:33.726 21:26:59 -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:33.726 21:26:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:33.984 rmmod nvme_tcp
00:12:33.984 rmmod nvme_fabrics
00:12:33.984 rmmod nvme_keyring
00:12:33.984 21:26:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:33.984 21:26:59 -- nvmf/common.sh@124 -- # set -e
00:12:33.984 21:26:59 -- nvmf/common.sh@125 -- # return 0
00:12:33.984 21:26:59 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:12:33.984 21:26:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:12:33.984 21:26:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:12:33.984 21:26:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:12:33.984 21:26:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:33.984 21:26:59 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:33.984 21:26:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:33.984 21:26:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:33.984 21:26:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:35.885 21:27:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:35.885 21:27:01 -- target/multipath.sh@48 -- # exit 0
00:12:35.885 21:27:01 -- target/multipath.sh@1 -- # nvmftestfini
00:12:35.885 21:27:01 -- nvmf/common.sh@477 -- # nvmfcleanup
00:12:35.885 21:27:01 -- nvmf/common.sh@117 -- # sync
00:12:35.885 21:27:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:35.885 21:27:01 -- nvmf/common.sh@120 -- # set +e
00:12:35.885 21:27:01 -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:35.885 21:27:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:35.885 21:27:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:35.885 21:27:01 -- nvmf/common.sh@124 -- # set -e
00:12:35.885 21:27:01 -- nvmf/common.sh@125 -- # return 0
00:12:35.885 21:27:01 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:12:35.885 21:27:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:12:35.885 21:27:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:12:35.885 21:27:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:12:35.885 21:27:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:35.885 21:27:01 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:35.885 21:27:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:35.885 21:27:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:35.885 21:27:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:35.885 21:27:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:35.885
00:12:35.885 real 0m4.250s
00:12:35.885 user 0m0.751s
00:12:35.885 sys 0m1.486s
00:12:35.885 21:27:01 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:12:35.885 21:27:01 -- common/autotest_common.sh@10 -- # set +x
00:12:35.885 ************************************
00:12:35.885 END TEST nvmf_multipath
00:12:35.885 ************************************
00:12:35.885 21:27:01 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:12:35.885 21:27:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:12:35.885 21:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:35.885 21:27:01 -- common/autotest_common.sh@10 -- # set +x
00:12:36.144 ************************************
00:12:36.144 START TEST nvmf_zcopy
00:12:36.144 ************************************
00:12:36.144 21:27:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:12:36.144 * Looking for test storage...
00:12:36.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:36.144 21:27:01 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:36.144 21:27:01 -- nvmf/common.sh@7 -- # uname -s
00:12:36.144 21:27:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:36.144 21:27:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:36.144 21:27:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:36.144 21:27:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:36.144 21:27:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:36.144 21:27:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:36.144 21:27:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:36.144 21:27:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:36.144 21:27:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:36.144 21:27:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:36.144 21:27:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:12:36.144 21:27:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:12:36.144 21:27:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:36.144 21:27:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:36.144 21:27:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:36.144 21:27:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:36.144 21:27:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:36.144 21:27:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:36.144 21:27:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:36.144 21:27:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:36.144 21:27:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:36.144 21:27:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:36.144 21:27:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:36.144 21:27:01 -- paths/export.sh@5 -- # export PATH
00:12:36.144 21:27:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:36.144 21:27:01 -- nvmf/common.sh@47 -- # : 0
00:12:36.144 21:27:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:36.144 21:27:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:36.144 21:27:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:36.144 21:27:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:36.144 21:27:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:36.144 21:27:01 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:36.144 21:27:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:36.144 21:27:01 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:36.144 21:27:01 -- target/zcopy.sh@12 -- # nvmftestinit
00:12:36.144 21:27:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:12:36.144 21:27:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:36.144 21:27:01 -- nvmf/common.sh@437 -- # prepare_net_devs
00:12:36.144 21:27:01 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:12:36.144 21:27:01 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:12:36.144 21:27:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:36.144 21:27:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:36.144 21:27:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:36.144 21:27:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:12:36.144 21:27:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:12:36.144 21:27:01 -- nvmf/common.sh@285 -- # xtrace_disable
00:12:36.144 21:27:01 -- common/autotest_common.sh@10 -- # set +x
00:12:38.046 21:27:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:12:38.046 21:27:03 -- nvmf/common.sh@291 -- # pci_devs=()
00:12:38.046 21:27:03 -- nvmf/common.sh@291 -- # local -a pci_devs
00:12:38.046 21:27:03 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:12:38.046 21:27:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:12:38.046 21:27:03 -- nvmf/common.sh@293 -- # pci_drivers=()
00:12:38.046 21:27:03 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:12:38.046 21:27:03 -- nvmf/common.sh@295 -- # net_devs=()
00:12:38.046 21:27:03 -- nvmf/common.sh@295 -- # local -ga net_devs
00:12:38.046 21:27:03 -- nvmf/common.sh@296 -- # e810=()
00:12:38.046 21:27:03 -- nvmf/common.sh@296 -- # local -ga e810
00:12:38.046 21:27:03 -- nvmf/common.sh@297 -- # x722=()
00:12:38.046 21:27:03 -- nvmf/common.sh@297 -- # local -ga x722
00:12:38.046 21:27:03 -- nvmf/common.sh@298 -- # mlx=()
00:12:38.046 21:27:03 -- nvmf/common.sh@298 -- # local -ga mlx
00:12:38.046 21:27:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:38.046 21:27:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:38.046 21:27:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:38.046 21:27:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:38.046 21:27:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:38.046 21:27:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:38.046 21:27:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:38.046 21:27:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:38.046 21:27:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:38.046 21:27:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:38.046 21:27:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:38.046 21:27:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:12:38.046 21:27:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:12:38.046 21:27:03 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:12:38.046 21:27:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:38.046 21:27:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:12:38.046 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:12:38.046 21:27:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:38.046 21:27:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:12:38.046 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:12:38.046 21:27:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:12:38.046 21:27:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:38.046 21:27:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:38.046 21:27:03 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:12:38.046 21:27:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:38.046 21:27:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:12:38.046 Found net devices under 0000:0a:00.0: cvl_0_0
00:12:38.046 21:27:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:12:38.046 21:27:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:38.046 21:27:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:38.046 21:27:03 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:12:38.046 21:27:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:38.046 21:27:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:12:38.046 Found net devices under 0000:0a:00.1: cvl_0_1
00:12:38.046 21:27:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:12:38.046 21:27:03 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:12:38.046 21:27:03 -- nvmf/common.sh@403 -- # is_hw=yes
00:12:38.046 21:27:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:12:38.046 21:27:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:12:38.046 21:27:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:38.046 21:27:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:38.046 21:27:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:38.046 21:27:03 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:12:38.046 21:27:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:38.046 21:27:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:38.046 21:27:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:38.046 21:27:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:38.046 21:27:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:38.046 21:27:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:38.046 21:27:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:38.046 21:27:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:38.046 21:27:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:38.046 21:27:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:38.046 21:27:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:38.046 21:27:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:38.046 21:27:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:38.304 21:27:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:38.304 21:27:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:38.304 21:27:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:38.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:38.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms
00:12:38.304
00:12:38.304 --- 10.0.0.2 ping statistics ---
00:12:38.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:38.304 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:12:38.304 21:27:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:38.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:38.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:12:38.304
00:12:38.304 --- 10.0.0.1 ping statistics ---
00:12:38.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:38.304 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:12:38.304 21:27:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:38.304 21:27:03 -- nvmf/common.sh@411 -- # return 0
00:12:38.304 21:27:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:12:38.304 21:27:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:38.304 21:27:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:12:38.304 21:27:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:12:38.304 21:27:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:38.304 21:27:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:12:38.304 21:27:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:12:38.304 21:27:03 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:12:38.304 21:27:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:12:38.304 21:27:03 -- common/autotest_common.sh@710 -- # xtrace_disable
00:12:38.304 21:27:03 -- common/autotest_common.sh@10 -- # set +x
00:12:38.304 21:27:03 -- nvmf/common.sh@470 -- # nvmfpid=2582938
00:12:38.304 21:27:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:12:38.304 21:27:03 -- nvmf/common.sh@471 -- # waitforlisten 2582938
00:12:38.304 21:27:03 -- common/autotest_common.sh@817 -- # '[' -z 2582938 ']'
00:12:38.304 21:27:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:38.304 21:27:03 -- common/autotest_common.sh@822 -- # local max_retries=100
00:12:38.304 21:27:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:38.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:38.305 21:27:03 -- common/autotest_common.sh@826 -- # xtrace_disable
00:12:38.305 21:27:03 -- common/autotest_common.sh@10 -- # set +x
00:12:38.305 [2024-04-24 21:27:03.839948] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:12:38.305 [2024-04-24 21:27:03.840042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:38.305 EAL: No free 2048 kB hugepages reported on node 1
00:12:38.305 [2024-04-24 21:27:03.909354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:38.563 [2024-04-24 21:27:04.022112] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:38.563 [2024-04-24 21:27:04.022176] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:38.563 [2024-04-24 21:27:04.022190] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:38.563 [2024-04-24 21:27:04.022210] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:38.563 [2024-04-24 21:27:04.022235] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:38.563 [2024-04-24 21:27:04.022263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:38.563 21:27:04 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:12:38.563 21:27:04 -- common/autotest_common.sh@850 -- # return 0
00:12:38.563 21:27:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:12:38.563 21:27:04 -- common/autotest_common.sh@716 -- # xtrace_disable
00:12:38.563 21:27:04 -- common/autotest_common.sh@10 -- # set +x
00:12:38.563 21:27:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:38.563 21:27:04 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:12:38.563 21:27:04 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:12:38.563 21:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:38.563 21:27:04 -- common/autotest_common.sh@10 -- # set +x
00:12:38.563 [2024-04-24 21:27:04.158883] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:38.563 21:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:38.563 21:27:04 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:38.563 21:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:38.563 21:27:04 -- common/autotest_common.sh@10 -- # set +x
00:12:38.563 21:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:38.563 21:27:04 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:38.563 21:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:38.563 21:27:04 -- common/autotest_common.sh@10 -- # set +x
00:12:38.563 [2024-04-24 21:27:04.175144] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:38.563 21:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:38.563 21:27:04 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:38.563 21:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:38.563 21:27:04 -- common/autotest_common.sh@10 -- # set +x
00:12:38.563 21:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:38.563 21:27:04 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:12:38.563 21:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:38.563 21:27:04 -- common/autotest_common.sh@10 -- # set +x
00:12:38.563 malloc0
00:12:38.563 21:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:38.563 21:27:04 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:12:38.563 21:27:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:38.563 21:27:04 -- common/autotest_common.sh@10 -- # set +x
00:12:38.563 21:27:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:38.563 21:27:04 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:12:38.563 21:27:04 -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:12:38.563 21:27:04 -- nvmf/common.sh@521 -- # config=()
00:12:38.563 21:27:04 -- nvmf/common.sh@521 -- # local subsystem config
00:12:38.563 21:27:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:12:38.563 21:27:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:12:38.563 {
00:12:38.563 "params": {
00:12:38.563 "name": "Nvme$subsystem",
00:12:38.563 "trtype": "$TEST_TRANSPORT",
00:12:38.563 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:38.563 "adrfam": "ipv4",
00:12:38.563 "trsvcid": "$NVMF_PORT",
00:12:38.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:38.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:38.563 "hdgst": ${hdgst:-false},
00:12:38.563 "ddgst": ${ddgst:-false}
00:12:38.563 },
00:12:38.563 "method": "bdev_nvme_attach_controller"
00:12:38.563 }
00:12:38.563 EOF
00:12:38.563 )")
00:12:38.563 21:27:04 -- nvmf/common.sh@543 -- # cat
00:12:38.563 21:27:04 -- nvmf/common.sh@545 -- # jq .
00:12:38.563 21:27:04 -- nvmf/common.sh@546 -- # IFS=,
00:12:38.563 21:27:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:12:38.563 "params": {
00:12:38.563 "name": "Nvme1",
00:12:38.563 "trtype": "tcp",
00:12:38.563 "traddr": "10.0.0.2",
00:12:38.563 "adrfam": "ipv4",
00:12:38.563 "trsvcid": "4420",
00:12:38.563 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:38.563 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:38.563 "hdgst": false,
00:12:38.563 "ddgst": false
00:12:38.563 },
00:12:38.563 "method": "bdev_nvme_attach_controller"
00:12:38.563 }'
00:12:38.822 [2024-04-24 21:27:04.255322] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:12:38.822 [2024-04-24 21:27:04.255405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2583078 ]
00:12:38.822 EAL: No free 2048 kB hugepages reported on node 1
00:12:38.822 [2024-04-24 21:27:04.318702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:38.822 [2024-04-24 21:27:04.436572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:39.081 Running I/O for 10 seconds...
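The gen_nvmf_target_json heredoc above expands to the bdev_nvme_attach_controller stanza that bdevperf reads through --json /dev/fd/62. A minimal sketch of the same run driven from a file instead of a process substitution (illustrative; the outer "subsystems"/"bdev" wrapper is assumed from SPDK's JSON config format and is not shown verbatim in this log):

    cat > /tmp/bdevperf.json <<'JSON'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    JSON
    # Same workload as the logged run: 8 KiB verify I/O at queue depth 128 for 10 seconds.
    build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192

The 10-second run whose results follow uses exactly these connection parameters.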
00:12:51.311
00:12:51.311 Latency(us)
00:12:51.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:51.311 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:51.311 Verification LBA range: start 0x0 length 0x1000
00:12:51.311 Nvme1n1 : 10.02 5916.64 46.22 0.00 0.00 21576.89 2512.21 34369.99
00:12:51.311 ===================================================================================================================
00:12:51.311 Total : 5916.64 46.22 0.00 0.00 21576.89 2512.21 34369.99
00:12:51.311 21:27:15 -- target/zcopy.sh@39 -- # perfpid=2584782
00:12:51.311 21:27:15 -- target/zcopy.sh@41 -- # xtrace_disable
00:12:51.311 21:27:15 -- common/autotest_common.sh@10 -- # set +x
00:12:51.311 21:27:15 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:12:51.311 21:27:15 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:12:51.311 21:27:15 -- nvmf/common.sh@521 -- # config=()
00:12:51.311 21:27:15 -- nvmf/common.sh@521 -- # local subsystem config
00:12:51.311 21:27:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:12:51.311 21:27:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:12:51.311 {
00:12:51.311 "params": {
00:12:51.311 "name": "Nvme$subsystem",
00:12:51.311 "trtype": "$TEST_TRANSPORT",
00:12:51.311 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:51.311 "adrfam": "ipv4",
00:12:51.311 "trsvcid": "$NVMF_PORT",
00:12:51.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:51.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:51.311 "hdgst": ${hdgst:-false},
00:12:51.311 "ddgst": ${ddgst:-false}
00:12:51.311 },
00:12:51.311 "method": "bdev_nvme_attach_controller"
00:12:51.311 }
00:12:51.311 EOF
00:12:51.311 )")
00:12:51.311 [2024-04-24 21:27:15.072146] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.072191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 21:27:15 -- nvmf/common.sh@543 -- # cat
00:12:51.311 21:27:15 -- nvmf/common.sh@545 -- # jq .
00:12:51.311 21:27:15 -- nvmf/common.sh@546 -- # IFS=,
00:12:51.311 21:27:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:12:51.311 "params": {
00:12:51.311 "name": "Nvme1",
00:12:51.311 "trtype": "tcp",
00:12:51.311 "traddr": "10.0.0.2",
00:12:51.311 "adrfam": "ipv4",
00:12:51.311 "trsvcid": "4420",
00:12:51.311 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:51.311 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:51.311 "hdgst": false,
00:12:51.311 "ddgst": false
00:12:51.311 },
00:12:51.311 "method": "bdev_nvme_attach_controller"
00:12:51.311 }'
00:12:51.311 [2024-04-24 21:27:15.080102] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.080130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.088116] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.088141] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.096128] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.096148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.104151] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.104171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.109708] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:12:51.311 [2024-04-24 21:27:15.109771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2584782 ]
00:12:51.311 [2024-04-24 21:27:15.112171] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.112191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.120194] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.120215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.128215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.128236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.136236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.136256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 EAL: No free 2048 kB hugepages reported on node 1
00:12:51.311 [2024-04-24 21:27:15.144274] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.144299] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.152294] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.152318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.160314] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.160340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.168336] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.168362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.173012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:51.311 [2024-04-24 21:27:15.176359] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.176385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.184416] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.311 [2024-04-24 21:27:15.184454] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.311 [2024-04-24 21:27:15.192409] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.192435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.200425] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.200450] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.208447] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.208473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.216470] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.216496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.224494] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.224520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.232512] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.232538] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.240562] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.240596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.248575] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.248619] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.256583] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.256608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.264603] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.264637] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.272625] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.272673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.280653] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.280690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.288687] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.288709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.291813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:51.312 [2024-04-24 21:27:15.296706] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.296727] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.304725] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.304747] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.312789] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.312821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.320779] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.320812] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.328801] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.328834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.336828] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.336862] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.344847] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.344882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.352869] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.352904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.360865] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.360889] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.368895] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.368939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.376948] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.376985] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.384974] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.385013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.392961] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.393000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.400993] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.401018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.409034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.409063] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.417042] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.417071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.425064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.425091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.433088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.433116] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.441108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.441145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.449135] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.449161] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.457153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.457178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.465176] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.465201] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.473205] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.473237] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.481227] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.481254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.489249] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.489277] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.497268] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.497294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.505289] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.505314] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.513311] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.513336] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.521333] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.521358] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.529360] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.529386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.537383] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.537410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.545401] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.545425] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.553423] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.553448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.561448] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.561473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.569471] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.569496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.577502] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.577530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.585518] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.585543] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.593539] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.593564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.601562] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.601587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.609584] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.609611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:51.312 [2024-04-24 21:27:15.617610] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:51.312 [2024-04-24 21:27:15.617646]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.312 [2024-04-24 21:27:15.625644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.312 [2024-04-24 21:27:15.625684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.312 [2024-04-24 21:27:15.633739] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.633766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.641751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.641774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 Running I/O for 5 seconds... 00:12:51.313 [2024-04-24 21:27:15.650055] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.650086] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.663549] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.663581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.674020] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.674052] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.686426] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.686457] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.697688] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.697717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.709753] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.709782] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.723081] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.723119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.733269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.733300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.745108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.745139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.756840] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.756868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.768789] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.768817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.780821] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.780849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.792286] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.792317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.803897] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.803940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.814863] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.814891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.825258] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.825286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.836198] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.836225] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.847117] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.847144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.857700] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.857727] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.867854] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.867881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.879031] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.879059] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.889937] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.889976] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.900464] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.900491] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.911518] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.911560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.922285] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.922312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.933225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.933262] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.943963] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.943990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.956536] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.956564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.965925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.965952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.977274] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.977301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.986792] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.986819] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:15.998207] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:15.998234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.008728] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.008755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.018519] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.018546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.030028] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.030055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.040711] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.040738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.050765] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.050792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.061414] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.061441] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.071949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.071976] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.082474] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.082501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.094565] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.094592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.103979] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.104006] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.115094] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.115121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.125773] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.125799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.136249] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.136283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.147790] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.147818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.158969] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.158997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.169737] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.169764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.180828] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.180855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.191798] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.191824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.201580] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.201607] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.212608] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.212652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.222799] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.313 [2024-04-24 21:27:16.222826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.313 [2024-04-24 21:27:16.233383] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.233410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.246365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.246393] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.256150] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.256178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.267221] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.267249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.279312] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.279340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.288794] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.288822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.300316] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.300349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.310850] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.310877] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.322553] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.322582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.332686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.332713] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.344095] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.344124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.354569] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.354596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.364792] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.364820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.375077] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.375104] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.385533] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.385560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.396235] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.396263] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.408655] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.408682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.417818] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.417846] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.428991] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.429018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.439697] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.439724] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.450439] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.450467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.461315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.461342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.471915] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.471942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.482544] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.482571] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.492986] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.493013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.503196] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.503223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.514048] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.514075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.524577] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.524604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.534818] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.534845] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.545688] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.545715] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.556275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.556302] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.566885] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.566912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.577294] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.577329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.588144] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.588172] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.598788] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.598816] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.609663] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.609690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.620238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.620266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.632666] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.632693] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.642441] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.642468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.653328] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.653356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.664479] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.664507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.675212] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.675240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.685869] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.685897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.696838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.696866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.707951] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.707978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.719121] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.719149] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.729893] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.729920] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.740377] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.740405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.753506] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.753534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.763167] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.763196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.774236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.774264] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.785064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.785092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.795727] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.795755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.808749] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.808776] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.314 [2024-04-24 21:27:16.818823] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.314 [2024-04-24 21:27:16.818853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.829868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.829897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.840302] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.840330] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.850543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.850572] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.860976] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.861004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.871578] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.871606] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.882739] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.882766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.893298] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.893325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.904230] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.904258] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.915312] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.915340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.925827] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.925854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.938575] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.938602] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.948034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.948069] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.959488] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.959517] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.969438] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.969465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.315 [2024-04-24 21:27:16.980619] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.315 [2024-04-24 21:27:16.980654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:16.991875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:16.991902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.002928] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.002957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.013686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.013714] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.024710] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.024738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.035408] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.035435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.046386] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.046414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.057284] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.057311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.067401] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.067428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.078329] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.078358] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.089385] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.089412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.100075] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.100104] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.111130] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.111159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.121156] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.121184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.132546] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.132573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.142427] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.142454] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.153256] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.153290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.573 [2024-04-24 21:27:17.164082] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.573 [2024-04-24 21:27:17.164110] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.574 [2024-04-24 21:27:17.174812] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.574 [2024-04-24 21:27:17.174841] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.574 [2024-04-24 21:27:17.185819] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.574 [2024-04-24 21:27:17.185847] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.574 [2024-04-24 21:27:17.196017] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.574 [2024-04-24 21:27:17.196045] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.574 [2024-04-24 21:27:17.207439] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.574 [2024-04-24 21:27:17.207467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.574 [2024-04-24 21:27:17.218325] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.574 [2024-04-24 21:27:17.218353] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.574 [2024-04-24 21:27:17.228480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.574 [2024-04-24 21:27:17.228507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.574 [2024-04-24 21:27:17.239032] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.574 [2024-04-24 21:27:17.239060] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.574 [2024-04-24 21:27:17.249890] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.574 [2024-04-24 21:27:17.249918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.832 [2024-04-24 21:27:17.260872] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.832 [2024-04-24 21:27:17.260907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.832 [2024-04-24 21:27:17.271091] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.832 [2024-04-24 21:27:17.271118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.832 [2024-04-24 21:27:17.281642] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.832 [2024-04-24 21:27:17.281678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.832 [2024-04-24 21:27:17.294431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.832 [2024-04-24 21:27:17.294459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.832 [2024-04-24 21:27:17.303978] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.832 [2024-04-24 21:27:17.304006] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.315266] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.315293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.326415] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.326443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.335904] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.335931] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.347341] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.347368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.357541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.357575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.368591] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.368618] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.378734] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.378761] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.390027] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.390055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.400575] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.400601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.411499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.411528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.421977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.422004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.432611] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.432647] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.443396] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.443423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.454438] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.454466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.465384] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.465412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.476454] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.476481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.486912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.486939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.497521] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.497548] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.833 [2024-04-24 21:27:17.508417] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.833 [2024-04-24 21:27:17.508445] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.091 [2024-04-24 21:27:17.519646] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.091 [2024-04-24 21:27:17.519674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.091 [2024-04-24 21:27:17.530271] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.091 [2024-04-24 21:27:17.530298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.091 [2024-04-24 21:27:17.540810] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.091 [2024-04-24 21:27:17.540838] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.091 [2024-04-24 21:27:17.553543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.091 [2024-04-24 21:27:17.553571] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.091 [2024-04-24 21:27:17.562821] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.562856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.573880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.573908] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.584732] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.584759] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.595560] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.595588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.606387] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.606413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.617334] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.617361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.627658] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.627685] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.638611] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.638646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.649476] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.649503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.660274] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.660302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.670902] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.670930] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.681523] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.681551] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.692264] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.692292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.702846] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.702874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.713596] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.713623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.723900] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.723927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.734247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.734275] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.745462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.745490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.756152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.756180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.092 [2024-04-24 21:27:17.767232] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.092 [2024-04-24 21:27:17.767268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.350 [2024-04-24 21:27:17.780306] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.350 [2024-04-24 21:27:17.780335] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.350 [2024-04-24 21:27:17.789564] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.789591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.801112] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.801140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.812111] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.812139] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.823233] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.823262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.833702] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.833730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.844975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.845003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.856153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.856180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.866785] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.866813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.877726] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.877760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.888783] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.888811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.899531] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.899559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.910514] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.910541] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.921728] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.921756] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.932931] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.932958] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.944081] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.944109] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.954935] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.954963] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.966037] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.966064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.976400] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.976427] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.987463] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.987490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:17.998237] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:17.998265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:18.008984] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:18.009012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.351 [2024-04-24 21:27:18.019748] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.351 [2024-04-24 21:27:18.019775] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.610 [2024-04-24 21:27:18.030531] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.610 [2024-04-24 21:27:18.030560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.610 [2024-04-24 21:27:18.041460] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.610 [2024-04-24 21:27:18.041489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.610 [2024-04-24 21:27:18.052276] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.610 [2024-04-24 21:27:18.052305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.610 [2024-04-24 21:27:18.062508] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.610 [2024-04-24 21:27:18.062535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.610 [2024-04-24 21:27:18.073691] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.610 [2024-04-24 21:27:18.073720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.083679] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.083707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.095066] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.095093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.105709] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.105737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.116553] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.116581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.127218] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.127246] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.137884] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.137912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.148870] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.148898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.159292] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.159319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.172124] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.172152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.182001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.182029] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.192860] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.192898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.203485] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.203513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.213722] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.213749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.224391] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.224418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.235272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.235300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.246228] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.246255] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.256981] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.257009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.267441] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.267469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.611 [2024-04-24 21:27:18.278594] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.611 [2024-04-24 21:27:18.278622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.290202] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.290230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.301003] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.301032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.311452] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.311483] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.321871] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.321899] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.332888] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.332916] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.343909] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.343937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.354363] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.354390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.365167] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.365195] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.375200] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.375228] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.386254] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.386282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.396781] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.396808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.407021] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.407048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.418155] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.418183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.428518] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.428546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.439063] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.439090] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.451828] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.451855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.461181] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.461209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.471896] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.471923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.482417] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.482444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.494750] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.494777] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.503991] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.504020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.515000] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.515027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.525560] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.525588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.535557] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.535585] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.870 [2024-04-24 21:27:18.546659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.870 [2024-04-24 21:27:18.546691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.557363] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.557392] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.567617] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.567655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.578954] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.578982] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.589365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.589394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.600085] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.600113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.610944] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.610972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.621208] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.621235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.632800] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.632828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.643264] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.643291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.654346] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.654375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.664770] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.664797] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.675468] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.675495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.686215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.686243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.696758] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.696786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.707512] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.707540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.718525] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.718554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.729076] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.729103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.739923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.739952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.129 [2024-04-24 21:27:18.752526] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.129 [2024-04-24 21:27:18.752554] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.130 [2024-04-24 21:27:18.761936] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.130 [2024-04-24 21:27:18.761964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.130 [2024-04-24 21:27:18.773103] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.130 [2024-04-24 21:27:18.773131] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.130 [2024-04-24 21:27:18.783676] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.130 [2024-04-24 21:27:18.783710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.130 [2024-04-24 21:27:18.795856] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.130 [2024-04-24 21:27:18.795883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.130 [2024-04-24 21:27:18.807227] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.130 [2024-04-24 21:27:18.807255] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.816896] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.816924] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.828710] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.828738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.839445] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.839473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.849433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.849461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.860376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.860404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.870989] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.871017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.881738] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.881766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.892593] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.892620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.903034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.903061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.916212] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.916239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.925831] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.925858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.937885] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.937912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.948157] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.948184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.959185] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.959212] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.970058] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.970086] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.980872] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.980899] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:18.991710] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:18.991746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:19.002428] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:19.002457] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:19.013471] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:19.013498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:19.024230] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:19.024258] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:19.035240] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:19.035266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:19.045350] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:19.045377] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.388 [2024-04-24 21:27:19.056759] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.388 [2024-04-24 21:27:19.056786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.067641] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.067670] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.077937] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.077964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.088643] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.088670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.099116] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.099143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.109901] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.109928] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.120513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.120540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.131444] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.131472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.142137] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.142164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.152870] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.152897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.163355] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.163383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.174017] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.174045] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.184967] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.184995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.195137] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.195171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.205959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.205986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.217099] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.217126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.227839] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.227866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.238504] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.238532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.249511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.249538] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.259772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.259799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.270409] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.270437] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.281156] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.281184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.294068] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.294095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.303442] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.303469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.647 [2024-04-24 21:27:19.314428] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.647 [2024-04-24 21:27:19.314456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.326665] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.326695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.336248] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.336274] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.347527] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.347555] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.357778] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.357806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.369343] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.369370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.380084] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.380111] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.390887] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.390913] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.401347] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.401383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.412148] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.412175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.422669] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.422707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.433893] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.433921] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.444777] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.444804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.455589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.455616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.466309] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.466336] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.478660] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.478687] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.488248] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.488274] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.498952] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.498980] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.509299] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.509326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.519717] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.519744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.530278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.530305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.540972] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.541000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.551174] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.551202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.560776] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.560803] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.571796] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.571823] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.908 [2024-04-24 21:27:19.582287] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.908 [2024-04-24 21:27:19.582315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.592943] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.592973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.605590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.605618] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.615152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.615179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.626583] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.626611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.637618] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.637653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.648300] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.648327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.659505] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.659532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.670409] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.670437] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.683436] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.683464] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.695243] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.695270] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.704212] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.704240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.715987] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.716015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.728415] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.728443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.738111] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.738138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.749323] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.749350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.759785] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.759812] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.770326] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.770353] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.780596] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.780624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.794335] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.794364] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.803930] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.803958] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.815405] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.815433] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.825102] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.825130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.836133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.836161] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.169 [2024-04-24 21:27:19.846535] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.169 [2024-04-24 21:27:19.846562] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.857272] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.857301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.867926] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.867953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.878364] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.878391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.889495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.889522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.899680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.899708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.911182] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.911210] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.921738] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.921765] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.934452] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.934479] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.944050] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.944077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.955124] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.955151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.965184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.965211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.976715] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.976743] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.987149] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.987177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:19.997419] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:19.997450] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:20.008321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:20.008350] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:20.019220] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:20.019251] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:20.029722] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:20.029751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:20.039661] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:20.039697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:20.051215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:20.051243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:20.061555] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:20.061583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:20.073160] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:20.073188] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:20.083978] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:20.084019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:20.094827] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:20.094854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.429 [2024-04-24 21:27:20.105730] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.429 [2024-04-24 21:27:20.105757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.116107] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.116135] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.127376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.127403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.138184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.138211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.149158] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.149186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.159940] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.159968] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.170506] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.170533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.183502] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.183530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.193242] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.193269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.204014] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.204043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.215125] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.215155] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.226455] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.226483] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.237305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.237334] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.248504] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.248533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.259420] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.259448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.270550] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.270578] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.281152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.281180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.291423] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.291451] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.302839] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.689 [2024-04-24 21:27:20.302867] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.689 [2024-04-24 21:27:20.313590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.690 [2024-04-24 21:27:20.313618] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.690 [2024-04-24 21:27:20.326964] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.690 [2024-04-24 21:27:20.326993] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.690 [2024-04-24 21:27:20.336930] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.690 [2024-04-24 21:27:20.336957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same add_ns_ext/ns_paused error pair repeats at roughly 10 ms intervals from 21:27:20.348 through 21:27:20.665 while zcopy.sh keeps retrying nvmf_subsystem_add_ns against an NSID that is still attached ...]
00:12:55.214
00:12:55.214 Latency(us)
00:12:55.214 Device Information                                     : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:12:55.214 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:55.214 Nvme1n1                                                :       5.01   11839.22      92.49       0.00       0.00   10797.46    3835.07   25826.04
00:12:55.214 ===================================================================================================================
00:12:55.214 Total                                                  :              11839.22      92.49       0.00       0.00   10797.46    3835.07   25826.04
[... the error pair resumes at 21:27:20.671 and repeats at ~8 ms intervals through 21:27:20.936 as the remaining retries drain ...]
00:12:55.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2584782) - No such process
00:12:55.474 21:27:20 -- target/zcopy.sh@49 -- # wait 2584782
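Triage note: the flood above is the test behaving as designed -- zcopy.sh keeps calling nvmf_subsystem_add_ns for an NSID that is still attached, and the target rejects every attempt until the namespace is removed. Outside the harness the same symptom and its fix look roughly like this (a minimal sketch, assuming a target already serving NSID 1 on cnode1 and a bdev named malloc0, as in this run):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# rejected while NSID 1 is attached: "Requested NSID 1 already in use"
$RPC nvmf_subsystem_add_ns $NQN malloc0 -n 1 || echo 'add_ns refused, as logged above'
# detach the old namespace first, then the same call succeeds
$RPC nvmf_subsystem_remove_ns $NQN 1
$RPC nvmf_subsystem_add_ns $NQN malloc0 -n 1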
00:12:55.474 21:27:20 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:55.474 21:27:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:55.474 21:27:20 -- common/autotest_common.sh@10 -- # set +x
00:12:55.474 21:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:55.474 21:27:20 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:12:55.474 delay0
00:12:55.474 21:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:55.474 21:27:20 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:12:55.474 21:27:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:55.474 21:27:20 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:12:55.474 EAL: No free 2048 kB hugepages reported on node 1
00:12:55.474 [2024-04-24 21:27:21.054827] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:13:02.052 Initializing NVMe Controllers
00:13:02.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:02.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:02.052 Initialization complete. Launching workers.
00:13:02.052 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 88
00:13:02.052 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 375, failed to submit 33
00:13:02.052          success 170, unsuccess 205, failed 0
00:13:02.052 21:27:27 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:13:02.052 21:27:27 -- target/zcopy.sh@60 -- # nvmftestfini
00:13:02.052 21:27:27 -- nvmf/common.sh@477 -- # nvmfcleanup
00:13:02.052 21:27:27 -- nvmf/common.sh@117 -- # sync
00:13:02.052 21:27:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:02.052 21:27:27 -- nvmf/common.sh@120 -- # set +e
00:13:02.052 21:27:27 -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:02.052 21:27:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:02.052 21:27:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:02.052 21:27:27 -- nvmf/common.sh@124 -- # set -e
00:13:02.052 21:27:27 -- nvmf/common.sh@125 -- # return 0
00:13:02.052 21:27:27 -- nvmf/common.sh@478 -- # '[' -n 2582938 ']'
00:13:02.052 21:27:27 -- nvmf/common.sh@479 -- # killprocess 2582938
00:13:02.052 21:27:27 -- common/autotest_common.sh@936 -- # '[' -z 2582938 ']'
00:13:02.052 21:27:27 -- common/autotest_common.sh@940 -- # kill -0 2582938
00:13:02.052 21:27:27 -- common/autotest_common.sh@941 -- # uname
00:13:02.052 21:27:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:02.052 21:27:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2582938
00:13:02.052 21:27:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:13:02.052 21:27:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:13:02.052 21:27:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2582938'
killing process with pid 2582938
00:13:02.052 21:27:27 -- common/autotest_common.sh@955 -- # kill 2582938
00:13:02.052 21:27:27 -- common/autotest_common.sh@960 -- # wait 2582938
00:13:02.052 21:27:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:13:02.052 21:27:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:13:02.052 21:27:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:13:02.052 21:27:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:02.052 21:27:27 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:02.052 21:27:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:02.052 21:27:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
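Worth noting about the run just traced: delay0 is a delay bdev stacked on malloc0 with all four latency values set to 1,000,000 (microseconds, so roughly a second per I/O), which is what keeps queue depth 64 full long enough for the abort example to have commands in flight to cancel. Reproduced by hand against an already-running target, the sequence is roughly (a sketch; workspace paths as in this job):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# delay bdev over malloc0: read/write latencies in microseconds (~1 s here)
$SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# 5 s of 50/50 randrw at queue depth 64, aborting commands in flight
$SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'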
00:13:02.052 21:27:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:03.954 21:27:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:03.954
00:13:03.954 real    0m28.013s
00:13:03.954 user    0m41.533s
00:13:03.954 sys     0m8.483s
00:13:03.954 21:27:29 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:13:03.954 21:27:29 -- common/autotest_common.sh@10 -- # set +x
00:13:03.954 ************************************
00:13:03.954 END TEST nvmf_zcopy
00:13:03.954 ************************************
00:13:04.215 21:27:29 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:04.215 21:27:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:04.215 21:27:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:04.215 21:27:29 -- common/autotest_common.sh@10 -- # set +x
00:13:04.215 ************************************
00:13:04.215 START TEST nvmf_nmic
00:13:04.215 ************************************
00:13:04.215 21:27:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:04.215 * Looking for test storage...
00:13:04.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:04.215 21:27:29 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
[... nvmf/common.sh variable setup: NVMF ports 4420/4421/4422, IP prefix 192.168.100, TCP IP 127.0.0.1, serial SPDKISFASTANDAWESOME, generated hostnqn/hostid nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55, NVME_CONNECT='nvme connect', NET_TYPE=phy, subnqn nqn.2016-06.io.spdk:testnqn; then scripts/common.sh and /etc/opt/spdk-pkgdep/paths/export.sh are sourced ...]
00:13:04.215 21:27:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same toolchain dirs repeated by earlier prepends]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:04.215 21:27:29 -- paths/export.sh@3 -- # PATH=[/opt/go/1.21.1/bin prepended to the same value]
00:13:04.215 21:27:29 -- paths/export.sh@4 -- # PATH=[/opt/protoc/21.7/bin prepended to the same value]
00:13:04.215 21:27:29 -- paths/export.sh@5 -- # export PATH
00:13:04.215 21:27:29 -- paths/export.sh@6 -- # echo [the same PATH value]
00:13:04.215 21:27:29 -- nvmf/common.sh@47 -- # : 0
00:13:04.215 21:27:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:13:04.215 21:27:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:04.215 21:27:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:04.215 21:27:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:04.215 21:27:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:04.215 21:27:29 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:13:04.215 21:27:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:13:04.215 21:27:29 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:13:04.215 21:27:29 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:04.215 21:27:29 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:04.215 21:27:29 -- target/nmic.sh@14 -- # nvmftestinit
00:13:04.215 21:27:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:13:04.215 21:27:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:04.215 21:27:29 -- nvmf/common.sh@437 -- # prepare_net_devs
00:13:04.215 21:27:29 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:13:04.215 21:27:29 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:13:04.215 21:27:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd
_remove_spdk_ns 00:13:04.215 21:27:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.215 21:27:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.215 21:27:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:04.215 21:27:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:04.215 21:27:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.215 21:27:29 -- common/autotest_common.sh@10 -- # set +x 00:13:06.118 21:27:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:06.118 21:27:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:06.118 21:27:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:06.118 21:27:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:06.118 21:27:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:06.118 21:27:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:06.118 21:27:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:06.118 21:27:31 -- nvmf/common.sh@295 -- # net_devs=() 00:13:06.118 21:27:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:06.118 21:27:31 -- nvmf/common.sh@296 -- # e810=() 00:13:06.118 21:27:31 -- nvmf/common.sh@296 -- # local -ga e810 00:13:06.118 21:27:31 -- nvmf/common.sh@297 -- # x722=() 00:13:06.118 21:27:31 -- nvmf/common.sh@297 -- # local -ga x722 00:13:06.118 21:27:31 -- nvmf/common.sh@298 -- # mlx=() 00:13:06.118 21:27:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:06.118 21:27:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.118 21:27:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.118 21:27:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.118 21:27:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.118 21:27:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.118 21:27:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.118 21:27:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.118 21:27:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.118 21:27:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.118 21:27:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.118 21:27:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.118 21:27:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:06.118 21:27:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:06.118 21:27:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:06.118 21:27:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:06.118 21:27:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:06.118 21:27:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:06.118 21:27:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.118 21:27:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:06.118 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:06.118 21:27:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.119 21:27:31 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:06.119 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:06.119 21:27:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:06.119 21:27:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.119 21:27:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.119 21:27:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:06.119 21:27:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.119 21:27:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:06.119 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:06.119 21:27:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.119 21:27:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.119 21:27:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.119 21:27:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:06.119 21:27:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.119 21:27:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:06.119 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:06.119 21:27:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.119 21:27:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:06.119 21:27:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:06.119 21:27:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:06.119 21:27:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:06.119 21:27:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.119 21:27:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.119 21:27:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.119 21:27:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:06.119 21:27:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.119 21:27:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.119 21:27:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:06.119 21:27:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.119 21:27:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.119 21:27:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:06.119 21:27:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:06.119 21:27:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.119 21:27:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.119 21:27:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.119 21:27:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.119 21:27:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:06.119 21:27:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:13:06.378 21:27:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.378 21:27:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.378 21:27:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:06.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:13:06.378 00:13:06.378 --- 10.0.0.2 ping statistics --- 00:13:06.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.378 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:13:06.378 21:27:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:13:06.378 00:13:06.378 --- 10.0.0.1 ping statistics --- 00:13:06.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.378 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:06.378 21:27:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.378 21:27:31 -- nvmf/common.sh@411 -- # return 0 00:13:06.378 21:27:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:06.378 21:27:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.378 21:27:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:06.378 21:27:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:06.378 21:27:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.378 21:27:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:06.378 21:27:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:06.378 21:27:31 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:06.378 21:27:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:06.378 21:27:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:06.378 21:27:31 -- common/autotest_common.sh@10 -- # set +x 00:13:06.378 21:27:31 -- nvmf/common.sh@470 -- # nvmfpid=2588165 00:13:06.378 21:27:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.378 21:27:31 -- nvmf/common.sh@471 -- # waitforlisten 2588165 00:13:06.378 21:27:31 -- common/autotest_common.sh@817 -- # '[' -z 2588165 ']' 00:13:06.378 21:27:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.378 21:27:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:06.378 21:27:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.378 21:27:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:06.378 21:27:31 -- common/autotest_common.sh@10 -- # set +x 00:13:06.378 [2024-04-24 21:27:31.910181] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
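The environment the target has just started into amounts to a two-endpoint TCP rig on one machine: the target side of the e810 pair (cvl_0_0, 10.0.0.2) is pushed into the cvl_0_0_ns_spdk network namespace and nvmf_tgt runs inside it, while the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace. Condensed, the wiring traced above is roughly (a sketch of the nvmf/common.sh steps, not a verbatim extract):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
# the target itself then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF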
00:13:06.378 [2024-04-24 21:27:31.910266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.378 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.378 [2024-04-24 21:27:31.979645] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.637 [2024-04-24 21:27:32.096854] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.637 [2024-04-24 21:27:32.096914] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.637 [2024-04-24 21:27:32.096938] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.637 [2024-04-24 21:27:32.096951] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.637 [2024-04-24 21:27:32.096962] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.637 [2024-04-24 21:27:32.097059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.637 [2024-04-24 21:27:32.097132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.637 [2024-04-24 21:27:32.097199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.637 [2024-04-24 21:27:32.097202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.204 21:27:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:07.204 21:27:32 -- common/autotest_common.sh@850 -- # return 0 00:13:07.204 21:27:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:07.204 21:27:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:07.204 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:07.204 21:27:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.204 21:27:32 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:07.204 21:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.204 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:07.204 [2024-04-24 21:27:32.867495] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.204 21:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.204 21:27:32 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:07.204 21:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.204 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:07.463 Malloc0 00:13:07.463 21:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.463 21:27:32 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:07.463 21:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.463 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:07.463 21:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.463 21:27:32 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:07.463 21:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.463 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:07.463 21:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.463 21:27:32 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 
-s 4420 00:13:07.463 21:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.463 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:07.463 [2024-04-24 21:27:32.918935] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.463 21:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.463 21:27:32 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:07.463 test case1: single bdev can't be used in multiple subsystems 00:13:07.463 21:27:32 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:07.463 21:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.463 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:07.463 21:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.463 21:27:32 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:07.463 21:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.463 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:07.463 21:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.463 21:27:32 -- target/nmic.sh@28 -- # nmic_status=0 00:13:07.463 21:27:32 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:07.463 21:27:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.463 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:07.463 [2024-04-24 21:27:32.942774] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:07.463 [2024-04-24 21:27:32.942804] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:07.463 [2024-04-24 21:27:32.942827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.463 request: 00:13:07.463 { 00:13:07.463 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:07.463 "namespace": { 00:13:07.463 "bdev_name": "Malloc0", 00:13:07.463 "no_auto_visible": false 00:13:07.463 }, 00:13:07.463 "method": "nvmf_subsystem_add_ns", 00:13:07.463 "req_id": 1 00:13:07.463 } 00:13:07.463 Got JSON-RPC error response 00:13:07.463 response: 00:13:07.463 { 00:13:07.463 "code": -32602, 00:13:07.463 "message": "Invalid parameters" 00:13:07.463 } 00:13:07.463 21:27:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:07.463 21:27:32 -- target/nmic.sh@29 -- # nmic_status=1 00:13:07.463 21:27:32 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:07.463 21:27:32 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:07.463 Adding namespace failed - expected result. 
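The -32602 response above is the pass condition for test case1: Malloc0 is already claimed (type exclusive_write) by cnode1, so a second subsystem cannot attach it. Reproduced by hand the check is just (a sketch, using the same rpc.py calls the harness traced):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# expected to fail: Malloc0 is already claimed by cnode1
if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo ' Adding namespace failed - expected result.'
fi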
00:13:07.463 21:27:32 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
test case2: host connect to nvmf target in multiple paths
00:13:07.463 21:27:32 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:13:07.463 [2024-04-24 21:27:32.950887] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:13:07.463 21:27:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:07.463 21:27:32 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:08.028 21:27:33 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:13:08.595 21:27:34 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
[... waitforserial polls lsblk -l -o NAME,SERIAL for SPDKISFASTANDAWESOME every 2 s until one device appears, then returns 0 ...]
00:13:11.122 21:27:36 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:13:11.122 [global]
00:13:11.122 thread=1
00:13:11.122 invalidate=1
00:13:11.122 rw=write
00:13:11.122 time_based=1
00:13:11.122 runtime=1
00:13:11.122 ioengine=libaio
00:13:11.122 direct=1
00:13:11.122 bs=4096
00:13:11.122 iodepth=1
00:13:11.122 norandommap=0
00:13:11.122 numjobs=1
00:13:11.122
00:13:11.122 verify_dump=1
00:13:11.122 verify_backlog=512
00:13:11.122 verify_state_save=0
00:13:11.122 do_verify=1
00:13:11.122 verify=crc32c-intel
00:13:11.122 [job0]
00:13:11.122 filename=/dev/nvme0n1
00:13:11.122 Could not set queue depth (nvme0n1)
00:13:11.122 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:11.122 fio-3.35
00:13:11.122 Starting 1 thread
00:13:12.055
00:13:12.055 job0: (groupid=0, jobs=1): err= 0: pid=2588804: Wed Apr 24 21:27:37 2024
00:13:12.055   read: IOPS=977, BW=3908KiB/s (4002kB/s)(3912KiB/1001msec)
00:13:12.055     slat (nsec): min=7531, max=62143, avg=18693.92, stdev=8277.54
00:13:12.055     clat (usec): min=382, max=42517, avg=697.59, stdev=2439.31
00:13:12.055      lat (usec): min=400, max=42536, avg=716.28, stdev=2439.78
00:13:12.055     clat percentiles (usec):
00:13:12.055      |  1.00th=[  441],  5.00th=[  461], 10.00th=[  482], 20.00th=[  494],
00:13:12.055      | 30.00th=[  506], 40.00th=[  529], 50.00th=[  545], 60.00th=[  553],
00:13:12.055      | 70.00th=[  553], 80.00th=[  562], 90.00th=[  635], 95.00th=[  693],
00:13:12.055      | 99.00th=[  725], 99.50th=[  865], 99.90th=[42730], 99.95th=[42730],
00:13:12.055      | 99.99th=[42730]
00:13:12.055   write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:13:12.055     slat (usec): min=7, max=33157, avg=45.55, stdev=1035.79
00:13:12.055     clat (usec): min=200, max=423, avg=238.37, stdev=36.34
00:13:12.055      lat (usec): min=209, max=33467, avg=283.92, stdev=1038.79
00:13:12.055     clat percentiles (usec):
00:13:12.055      |  1.00th=[  204],  5.00th=[  208], 10.00th=[  210], 20.00th=[  215],
00:13:12.055      | 30.00th=[  217], 40.00th=[  221], 50.00th=[  225], 60.00th=[  233],
00:13:12.055      | 70.00th=[  245], 80.00th=[  258], 90.00th=[  281], 95.00th=[  322],
00:13:12.055      | 99.00th=[  379], 99.50th=[  396], 99.90th=[  420], 99.95th=[  424],
00:13:12.055      | 99.99th=[  424]
00:13:12.055    bw (  KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:13:12.055    iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:12.055   lat (usec)   : 250=38.41%, 500=25.22%, 750=36.11%, 1000=0.05%
00:13:12.055   lat (msec)   : 50=0.20%
00:13:12.055   cpu          : usr=1.30%, sys=4.10%, ctx=2005, majf=0, minf=2
00:13:12.055   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:12.055      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:12.055      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:12.055      issued rwts: total=978,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:12.055      latency   : target=0, window=0, percentile=100.00%, depth=1
00:13:12.055
00:13:12.055 Run status group 0 (all jobs):
00:13:12.055    READ: bw=3908KiB/s (4002kB/s), 3908KiB/s-3908KiB/s (4002kB/s-4002kB/s), io=3912KiB (4006kB), run=1001-1001msec
00:13:12.055   WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec
00:13:12.055
00:13:12.055 Disk stats (read/write):
00:13:12.055   nvme0n1: ios=881/1024, merge=0/0, ticks=1042/231, in_queue=1273, util=98.90%
00:13:12.313 21:27:37 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:13:12.313 21:27:37 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
[... waitforserial_disconnect polls lsblk until serial SPDKISFASTANDAWESOME disappears, then returns 0 ...]
00:13:12.314 21:27:37 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:13:12.314 21:27:37 -- target/nmic.sh@53 -- # nvmftestfini
[... nvmftestfini trace identical in shape to the zcopy teardown above: sync, modprobe -v -r nvme-tcp / nvme-fabrics (rmmod nvme_tcp, nvme_fabrics, nvme_keyring), killprocess 2588165 (reactor_0, killed and waited), then nvmf_tcp_fini and remove_spdk_ns ...]
00:13:15.105 21:27:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:15.105
00:13:15.105 real    0m10.510s
00:13:15.105 user    0m25.202s
00:13:15.105 sys     0m2.427s
00:13:15.105 21:27:40 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:13:15.105 21:27:40 -- common/autotest_common.sh@10 -- # set +x
00:13:15.105 ************************************
00:13:15.105 END TEST nvmf_nmic
00:13:15.105 ************************************
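For reference, the nmic verification job above is easy to rerun outside the wrapper; fio-wrapper only generates the job file shown in the transcript. A rough command-line equivalent (a sketch; /dev/nvme0n1 assumes the same connect/enumeration order as above):

fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512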
00:13:15.105 21:27:40 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:13:15.105 21:27:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:15.105 21:27:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:15.105 21:27:40 -- common/autotest_common.sh@10 -- # set +x
00:13:15.105 ************************************
00:13:15.105 START TEST nvmf_fio_target
00:13:15.105 ************************************
00:13:15.105 21:27:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:13:15.105 * Looking for test storage...
00:13:15.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:15.105 21:27:40 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
[... nvmf/common.sh, scripts/common.sh and paths/export.sh setup identical to the nmic run above: ports 4420/4421/4422, serial SPDKISFASTANDAWESOME, hostnqn/hostid nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55, NET_TYPE=phy, the repeated toolchain PATH exports, and NVMF_APP argument assembly ...]
00:13:15.106 21:27:40 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:15.106 21:27:40 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:15.106 21:27:40 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:15.106 21:27:40 -- target/fio.sh@16 -- # nvmftestinit
[... nvmftestinit / prepare_net_devs / remove_spdk_ns trace identical to the nmic run above, through the pci_devs/pci_net_devs/pci_drivers declarations ...]
00:13:17.008 21:27:42 -- nvmf/common.sh@295 -- # net_devs=()
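The arrays being declared here are how nvmf/common.sh classifies NICs by PCI vendor:device before picking the test interfaces. A sketch of the mapping, with the IDs as traced in the nmic run above (pci_bus_cache is the script's own "vendor:device -> PCI address" cache, assumed populated by its bus scan):

intel=0x8086 mellanox=0x15b3
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})   # E810 family (ice driver)
x722=(${pci_bus_cache["$intel:0x37d2"]})                                     # X722
mlx=(${pci_bus_cache["$mellanox:0x1017"]} ${pci_bus_cache["$mellanox:0x1019"]})  # ConnectX family, among others
pci_devs=("${e810[@]}")   # SPDK_TEST_NVMF_NICS=e810 selects the e810 list in this job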
00:13:17.008 21:27:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.008 21:27:42 -- nvmf/common.sh@296 -- # e810=() 00:13:17.008 21:27:42 -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.008 21:27:42 -- nvmf/common.sh@297 -- # x722=() 00:13:17.008 21:27:42 -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.008 21:27:42 -- nvmf/common.sh@298 -- # mlx=() 00:13:17.008 21:27:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.008 21:27:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.008 21:27:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.008 21:27:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.008 21:27:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.008 21:27:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.008 21:27:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.008 21:27:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.008 21:27:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.008 21:27:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.008 21:27:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.008 21:27:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.008 21:27:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.008 21:27:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:17.008 21:27:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:17.008 21:27:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:17.008 21:27:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:17.008 21:27:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.008 21:27:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.008 21:27:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:17.008 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:17.008 21:27:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.009 21:27:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:17.009 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:17.009 21:27:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.009 21:27:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.009 21:27:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.009 21:27:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:17.009 21:27:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:13:17.009 21:27:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:13:17.009 Found net devices under 0000:0a:00.0: cvl_0_0
00:13:17.009 21:27:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:13:17.009 21:27:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:13:17.009 21:27:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:17.009 21:27:42 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:13:17.009 21:27:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:17.009 21:27:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:13:17.009 Found net devices under 0000:0a:00.1: cvl_0_1
00:13:17.009 21:27:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:13:17.009 21:27:42 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:13:17.009 21:27:42 -- nvmf/common.sh@403 -- # is_hw=yes
00:13:17.009 21:27:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:13:17.009 21:27:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:13:17.009 21:27:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:13:17.009 21:27:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:17.009 21:27:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:17.009 21:27:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:17.009 21:27:42 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:13:17.009 21:27:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:17.009 21:27:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:17.009 21:27:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:13:17.009 21:27:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:17.009 21:27:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:17.009 21:27:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:13:17.009 21:27:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:13:17.009 21:27:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:13:17.009 21:27:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:17.009 21:27:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:17.009 21:27:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:17.009 21:27:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:13:17.009 21:27:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:17.009 21:27:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:17.009 21:27:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:17.009 21:27:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:17.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:17.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms
00:13:17.009
00:13:17.009 --- 10.0.0.2 ping statistics ---
00:13:17.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:17.009 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms
00:13:17.009 21:27:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:17.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:17.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:13:17.009 00:13:17.009 --- 10.0.0.1 ping statistics --- 00:13:17.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.009 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:13:17.009 21:27:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.009 21:27:42 -- nvmf/common.sh@411 -- # return 0 00:13:17.009 21:27:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:17.009 21:27:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.009 21:27:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:17.009 21:27:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.009 21:27:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:17.009 21:27:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:17.009 21:27:42 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:17.009 21:27:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:17.009 21:27:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:17.009 21:27:42 -- common/autotest_common.sh@10 -- # set +x 00:13:17.009 21:27:42 -- nvmf/common.sh@470 -- # nvmfpid=2590890 00:13:17.009 21:27:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.009 21:27:42 -- nvmf/common.sh@471 -- # waitforlisten 2590890 00:13:17.009 21:27:42 -- common/autotest_common.sh@817 -- # '[' -z 2590890 ']' 00:13:17.009 21:27:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.009 21:27:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:17.009 21:27:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.009 21:27:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:17.009 21:27:42 -- common/autotest_common.sh@10 -- # set +x 00:13:17.009 [2024-04-24 21:27:42.658662] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:13:17.009 [2024-04-24 21:27:42.658771] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.268 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.268 [2024-04-24 21:27:42.735190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.268 [2024-04-24 21:27:42.859370] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.268 [2024-04-24 21:27:42.859428] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.268 [2024-04-24 21:27:42.859444] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.268 [2024-04-24 21:27:42.859458] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.268 [2024-04-24 21:27:42.859470] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
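[editor's note] The nvmf_tcp_init trace above reads as one short script: port cvl_0_0 is moved into a private network namespace and given the target address, port cvl_0_1 stays in the root namespace as the initiator, and the two pings prove the loop between the namespaces works before any NVMe traffic flows. Collected here from the per-command trace (interface names, addresses, and the iptables rule verbatim from the log; run as root):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP replies in
    ping -c 1 10.0.0.2                                            # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns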
00:13:17.268 [2024-04-24 21:27:42.862655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.268 [2024-04-24 21:27:42.862693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.268 [2024-04-24 21:27:42.862749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.268 [2024-04-24 21:27:42.862754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.528 21:27:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:17.528 21:27:42 -- common/autotest_common.sh@850 -- # return 0 00:13:17.528 21:27:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:17.528 21:27:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:17.528 21:27:42 -- common/autotest_common.sh@10 -- # set +x 00:13:17.528 21:27:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.528 21:27:43 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:17.786 [2024-04-24 21:27:43.263259] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.787 21:27:43 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.045 21:27:43 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:18.045 21:27:43 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.303 21:27:43 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:18.304 21:27:43 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.562 21:27:44 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:18.562 21:27:44 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.820 21:27:44 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:18.820 21:27:44 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:19.077 21:27:44 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.341 21:27:44 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:19.341 21:27:44 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.635 21:27:45 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:19.635 21:27:45 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.893 21:27:45 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:19.893 21:27:45 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:20.151 21:27:45 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:20.409 21:27:45 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:20.409 21:27:45 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:20.409 21:27:46 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:20.409 21:27:46 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:20.666 21:27:46 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:20.924 [2024-04-24 21:27:46.550839] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:20.924 21:27:46 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:13:21.181 21:27:46 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:13:21.439 21:27:47 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:22.006 21:27:47 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:13:22.006 21:27:47 -- common/autotest_common.sh@1184 -- # local i=0
00:13:22.006 21:27:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:13:22.006 21:27:47 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]]
00:13:22.006 21:27:47 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4
00:13:22.006 21:27:47 -- common/autotest_common.sh@1191 -- # sleep 2
00:13:24.536 21:27:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:13:24.536 21:27:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:13:24.536 21:27:49 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:13:24.536 21:27:49 -- common/autotest_common.sh@1193 -- # nvme_devices=4
00:13:24.536 21:27:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:13:24.536 21:27:49 -- common/autotest_common.sh@1194 -- # return 0
00:13:24.536 21:27:49 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:13:24.536 [global]
00:13:24.536 thread=1
00:13:24.536 invalidate=1
00:13:24.536 rw=write
00:13:24.536 time_based=1
00:13:24.536 runtime=1
00:13:24.536 ioengine=libaio
00:13:24.536 direct=1
00:13:24.536 bs=4096
00:13:24.536 iodepth=1
00:13:24.536 norandommap=0
00:13:24.536 numjobs=1
00:13:24.536
00:13:24.536 verify_dump=1
00:13:24.536 verify_backlog=512
00:13:24.536 verify_state_save=0
00:13:24.536 do_verify=1
00:13:24.536 verify=crc32c-intel
00:13:24.536 [job0]
00:13:24.536 filename=/dev/nvme0n1
00:13:24.536 [job1]
00:13:24.536 filename=/dev/nvme0n2
00:13:24.536 [job2]
00:13:24.536 filename=/dev/nvme0n3
00:13:24.536 [job3]
00:13:24.536 filename=/dev/nvme0n4
00:13:24.536 Could not set queue depth (nvme0n1)
00:13:24.536 Could not set queue depth (nvme0n2)
00:13:24.536 Could not set queue depth (nvme0n3)
00:13:24.536 Could not set queue depth (nvme0n4)
00:13:24.536 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:24.536 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:24.536 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:24.536 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:24.536 fio-3.35
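[editor's note] Condensing the RPC trace above: the test builds two plain malloc bdevs, a RAID0 over two more, and a concat over three, exposes all four as namespaces of nqn.2016-06.io.spdk:cnode1 on the 10.0.0.2:4420 TCP listener, then connects from the root namespace and polls until four devices with serial SPDKISFASTANDAWESOME appear. A condensed sketch of the same flow, assuming rpc.py (the full /var/jenkins/... path in the log) is on PATH and omitting the log's --hostnqn/--hostid flags:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4 5 6 7; do rpc.py bdev_malloc_create 64 512; done   # Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial's probe; expects 4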
00:13:24.536 Starting 4 threads 00:13:25.473 00:13:25.473 job0: (groupid=0, jobs=1): err= 0: pid=2591959: Wed Apr 24 21:27:51 2024 00:13:25.473 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:25.473 slat (nsec): min=7874, max=62272, avg=15295.50, stdev=8486.72 00:13:25.473 clat (usec): min=344, max=42071, avg=1220.96, stdev=5333.75 00:13:25.473 lat (usec): min=353, max=42089, avg=1236.26, stdev=5333.74 00:13:25.473 clat percentiles (usec): 00:13:25.473 | 1.00th=[ 359], 5.00th=[ 433], 10.00th=[ 445], 20.00th=[ 453], 00:13:25.473 | 30.00th=[ 465], 40.00th=[ 490], 50.00th=[ 510], 60.00th=[ 529], 00:13:25.473 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 586], 95.00th=[ 611], 00:13:25.473 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:25.473 | 99.99th=[42206] 00:13:25.473 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:25.473 slat (usec): min=9, max=18479, avg=36.05, stdev=577.03 00:13:25.473 clat (usec): min=212, max=882, avg=315.90, stdev=64.42 00:13:25.473 lat (usec): min=223, max=18897, avg=351.95, stdev=583.75 00:13:25.473 clat percentiles (usec): 00:13:25.473 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 251], 20.00th=[ 265], 00:13:25.473 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 310], 00:13:25.473 | 70.00th=[ 334], 80.00th=[ 367], 90.00th=[ 424], 95.00th=[ 445], 00:13:25.473 | 99.00th=[ 465], 99.50th=[ 474], 99.90th=[ 486], 99.95th=[ 881], 00:13:25.473 | 99.99th=[ 881] 00:13:25.473 bw ( KiB/s): min= 4096, max= 4096, per=22.84%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.473 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.473 lat (usec) : 250=6.05%, 500=75.52%, 750=17.77%, 1000=0.07% 00:13:25.473 lat (msec) : 50=0.59% 00:13:25.473 cpu : usr=1.00%, sys=4.00%, ctx=1540, majf=0, minf=1 00:13:25.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.473 issued rwts: total=512,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.473 job1: (groupid=0, jobs=1): err= 0: pid=2591960: Wed Apr 24 21:27:51 2024 00:13:25.473 read: IOPS=840, BW=3361KiB/s (3441kB/s)(3364KiB/1001msec) 00:13:25.473 slat (nsec): min=7218, max=84993, avg=15967.30, stdev=7509.90 00:13:25.473 clat (usec): min=384, max=42608, avg=719.81, stdev=2842.89 00:13:25.473 lat (usec): min=393, max=42623, avg=735.78, stdev=2842.87 00:13:25.473 clat percentiles (usec): 00:13:25.473 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 457], 00:13:25.473 | 30.00th=[ 486], 40.00th=[ 502], 50.00th=[ 519], 60.00th=[ 537], 00:13:25.473 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 635], 95.00th=[ 660], 00:13:25.473 | 99.00th=[ 750], 99.50th=[ 938], 99.90th=[42730], 99.95th=[42730], 00:13:25.473 | 99.99th=[42730] 00:13:25.473 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:25.473 slat (nsec): min=8964, max=76667, avg=20359.93, stdev=11122.10 00:13:25.473 clat (usec): min=246, max=595, avg=343.17, stdev=62.65 00:13:25.473 lat (usec): min=259, max=634, avg=363.53, stdev=68.05 00:13:25.473 clat percentiles (usec): 00:13:25.473 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 302], 00:13:25.473 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 334], 00:13:25.473 | 70.00th=[ 351], 80.00th=[ 388], 90.00th=[ 441], 
95.00th=[ 486], 00:13:25.473 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 570], 99.95th=[ 594], 00:13:25.473 | 99.99th=[ 594] 00:13:25.473 bw ( KiB/s): min= 4392, max= 4392, per=24.50%, avg=4392.00, stdev= 0.00, samples=1 00:13:25.473 iops : min= 1098, max= 1098, avg=1098.00, stdev= 0.00, samples=1 00:13:25.473 lat (usec) : 250=0.32%, 500=70.56%, 750=28.58%, 1000=0.32% 00:13:25.473 lat (msec) : 50=0.21% 00:13:25.473 cpu : usr=2.70%, sys=4.30%, ctx=1865, majf=0, minf=1 00:13:25.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.473 issued rwts: total=841,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.473 job2: (groupid=0, jobs=1): err= 0: pid=2591961: Wed Apr 24 21:27:51 2024 00:13:25.473 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:25.473 slat (nsec): min=7186, max=59673, avg=13931.41, stdev=7358.23 00:13:25.473 clat (usec): min=369, max=2342, avg=508.23, stdev=110.36 00:13:25.473 lat (usec): min=383, max=2354, avg=522.16, stdev=112.91 00:13:25.473 clat percentiles (usec): 00:13:25.473 | 1.00th=[ 383], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 433], 00:13:25.473 | 30.00th=[ 449], 40.00th=[ 461], 50.00th=[ 474], 60.00th=[ 494], 00:13:25.473 | 70.00th=[ 537], 80.00th=[ 594], 90.00th=[ 644], 95.00th=[ 685], 00:13:25.473 | 99.00th=[ 807], 99.50th=[ 848], 99.90th=[ 914], 99.95th=[ 2343], 00:13:25.473 | 99.99th=[ 2343] 00:13:25.473 write: IOPS=1413, BW=5654KiB/s (5790kB/s)(5660KiB/1001msec); 0 zone resets 00:13:25.473 slat (nsec): min=8531, max=75487, avg=18015.46, stdev=9339.27 00:13:25.473 clat (usec): min=226, max=554, avg=302.96, stdev=47.29 00:13:25.473 lat (usec): min=236, max=597, avg=320.98, stdev=51.79 00:13:25.473 clat percentiles (usec): 00:13:25.473 | 1.00th=[ 235], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 265], 00:13:25.473 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:13:25.473 | 70.00th=[ 314], 80.00th=[ 330], 90.00th=[ 359], 95.00th=[ 396], 00:13:25.473 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 553], 99.95th=[ 553], 00:13:25.473 | 99.99th=[ 553] 00:13:25.473 bw ( KiB/s): min= 5208, max= 5208, per=29.05%, avg=5208.00, stdev= 0.00, samples=1 00:13:25.473 iops : min= 1302, max= 1302, avg=1302.00, stdev= 0.00, samples=1 00:13:25.473 lat (usec) : 250=4.67%, 500=79.25%, 750=14.92%, 1000=1.11% 00:13:25.473 lat (msec) : 4=0.04% 00:13:25.473 cpu : usr=2.60%, sys=5.50%, ctx=2439, majf=0, minf=1 00:13:25.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.474 issued rwts: total=1024,1415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.474 job3: (groupid=0, jobs=1): err= 0: pid=2591962: Wed Apr 24 21:27:51 2024 00:13:25.474 read: IOPS=859, BW=3437KiB/s (3519kB/s)(3440KiB/1001msec) 00:13:25.474 slat (nsec): min=7753, max=65089, avg=16769.63, stdev=6525.28 00:13:25.474 clat (usec): min=429, max=40887, avg=662.01, stdev=1943.69 00:13:25.474 lat (usec): min=444, max=40902, avg=678.78, stdev=1943.53 00:13:25.474 clat percentiles (usec): 00:13:25.474 | 1.00th=[ 486], 5.00th=[ 498], 10.00th=[ 506], 20.00th=[ 523], 
00:13:25.474 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 578], 00:13:25.474 | 70.00th=[ 594], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 676], 00:13:25.474 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[40633], 99.95th=[40633], 00:13:25.474 | 99.99th=[40633] 00:13:25.474 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:25.474 slat (nsec): min=7453, max=76502, avg=20725.51, stdev=12819.22 00:13:25.474 clat (usec): min=254, max=1855, avg=376.82, stdev=91.66 00:13:25.474 lat (usec): min=263, max=1878, avg=397.54, stdev=97.51 00:13:25.474 clat percentiles (usec): 00:13:25.474 | 1.00th=[ 260], 5.00th=[ 273], 10.00th=[ 289], 20.00th=[ 310], 00:13:25.474 | 30.00th=[ 318], 40.00th=[ 334], 50.00th=[ 355], 60.00th=[ 392], 00:13:25.474 | 70.00th=[ 416], 80.00th=[ 449], 90.00th=[ 478], 95.00th=[ 510], 00:13:25.474 | 99.00th=[ 611], 99.50th=[ 660], 99.90th=[ 750], 99.95th=[ 1860], 00:13:25.474 | 99.99th=[ 1860] 00:13:25.474 bw ( KiB/s): min= 4096, max= 4096, per=22.84%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.474 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.474 lat (usec) : 500=53.72%, 750=45.65%, 1000=0.48% 00:13:25.474 lat (msec) : 2=0.05%, 50=0.11% 00:13:25.474 cpu : usr=2.80%, sys=3.60%, ctx=1885, majf=0, minf=2 00:13:25.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.474 issued rwts: total=860,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.474 00:13:25.474 Run status group 0 (all jobs): 00:13:25.474 READ: bw=12.6MiB/s (13.2MB/s), 2046KiB/s-4092KiB/s (2095kB/s-4190kB/s), io=12.6MiB (13.3MB), run=1001-1001msec 00:13:25.474 WRITE: bw=17.5MiB/s (18.4MB/s), 4092KiB/s-5654KiB/s (4190kB/s-5790kB/s), io=17.5MiB (18.4MB), run=1001-1001msec 00:13:25.474 00:13:25.474 Disk stats (read/write): 00:13:25.474 nvme0n1: ios=537/548, merge=0/0, ticks=1600/170, in_queue=1770, util=97.70% 00:13:25.474 nvme0n2: ios=746/1024, merge=0/0, ticks=473/344, in_queue=817, util=87.26% 00:13:25.474 nvme0n3: ios=937/1024, merge=0/0, ticks=469/286, in_queue=755, util=88.87% 00:13:25.474 nvme0n4: ios=666/1024, merge=0/0, ticks=662/363, in_queue=1025, util=91.22% 00:13:25.474 21:27:51 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:25.474 [global] 00:13:25.474 thread=1 00:13:25.474 invalidate=1 00:13:25.474 rw=randwrite 00:13:25.474 time_based=1 00:13:25.474 runtime=1 00:13:25.474 ioengine=libaio 00:13:25.474 direct=1 00:13:25.474 bs=4096 00:13:25.474 iodepth=1 00:13:25.474 norandommap=0 00:13:25.474 numjobs=1 00:13:25.474 00:13:25.474 verify_dump=1 00:13:25.474 verify_backlog=512 00:13:25.474 verify_state_save=0 00:13:25.474 do_verify=1 00:13:25.474 verify=crc32c-intel 00:13:25.474 [job0] 00:13:25.474 filename=/dev/nvme0n1 00:13:25.731 [job1] 00:13:25.731 filename=/dev/nvme0n2 00:13:25.731 [job2] 00:13:25.731 filename=/dev/nvme0n3 00:13:25.731 [job3] 00:13:25.732 filename=/dev/nvme0n4 00:13:25.732 Could not set queue depth (nvme0n1) 00:13:25.732 Could not set queue depth (nvme0n2) 00:13:25.732 Could not set queue depth (nvme0n3) 00:13:25.732 Could not set queue depth (nvme0n4) 00:13:25.732 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:13:25.732 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.732 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.732 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.732 fio-3.35 00:13:25.732 Starting 4 threads 00:13:27.171 00:13:27.171 job0: (groupid=0, jobs=1): err= 0: pid=2592194: Wed Apr 24 21:27:52 2024 00:13:27.171 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:13:27.171 slat (nsec): min=11661, max=32792, avg=15927.45, stdev=6725.28 00:13:27.171 clat (usec): min=634, max=42053, avg=39789.21, stdev=8757.89 00:13:27.171 lat (usec): min=646, max=42066, avg=39805.13, stdev=8758.49 00:13:27.171 clat percentiles (usec): 00:13:27.171 | 1.00th=[ 635], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:27.171 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:13:27.171 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:27.171 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:27.171 | 99.99th=[42206] 00:13:27.171 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:13:27.171 slat (nsec): min=6547, max=56201, avg=18192.87, stdev=9464.22 00:13:27.171 clat (usec): min=205, max=530, avg=282.06, stdev=51.95 00:13:27.171 lat (usec): min=214, max=578, avg=300.25, stdev=53.02 00:13:27.171 clat percentiles (usec): 00:13:27.171 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 235], 00:13:27.171 | 30.00th=[ 245], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 285], 00:13:27.171 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 379], 00:13:27.171 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 529], 99.95th=[ 529], 00:13:27.171 | 99.99th=[ 529] 00:13:27.171 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:13:27.171 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:27.171 lat (usec) : 250=33.71%, 500=61.99%, 750=0.37% 00:13:27.171 lat (msec) : 50=3.93% 00:13:27.171 cpu : usr=0.48%, sys=0.97%, ctx=534, majf=0, minf=1 00:13:27.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.171 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.171 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.171 job1: (groupid=0, jobs=1): err= 0: pid=2592195: Wed Apr 24 21:27:52 2024 00:13:27.171 read: IOPS=25, BW=102KiB/s (104kB/s)(104KiB/1022msec) 00:13:27.171 slat (nsec): min=9725, max=48808, avg=18944.85, stdev=11097.38 00:13:27.171 clat (usec): min=510, max=42033, avg=30401.95, stdev=18417.36 00:13:27.171 lat (usec): min=525, max=42046, avg=30420.90, stdev=18419.54 00:13:27.171 clat percentiles (usec): 00:13:27.171 | 1.00th=[ 510], 5.00th=[ 635], 10.00th=[ 644], 20.00th=[ 725], 00:13:27.171 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:27.171 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:27.171 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:27.171 | 99.99th=[42206] 00:13:27.171 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:13:27.171 slat (nsec): min=9476, max=73069, avg=26175.05, 
stdev=12280.49 00:13:27.171 clat (usec): min=281, max=623, avg=417.69, stdev=64.75 00:13:27.171 lat (usec): min=293, max=657, avg=443.87, stdev=65.18 00:13:27.171 clat percentiles (usec): 00:13:27.171 | 1.00th=[ 297], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 359], 00:13:27.171 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 416], 60.00th=[ 429], 00:13:27.171 | 70.00th=[ 445], 80.00th=[ 469], 90.00th=[ 506], 95.00th=[ 529], 00:13:27.171 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 627], 99.95th=[ 627], 00:13:27.171 | 99.99th=[ 627] 00:13:27.171 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:13:27.171 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:27.171 lat (usec) : 500=84.01%, 750=12.45% 00:13:27.171 lat (msec) : 50=3.53% 00:13:27.171 cpu : usr=0.59%, sys=2.06%, ctx=538, majf=0, minf=2 00:13:27.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.171 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.171 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.171 job2: (groupid=0, jobs=1): err= 0: pid=2592196: Wed Apr 24 21:27:52 2024 00:13:27.171 read: IOPS=862, BW=3448KiB/s (3531kB/s)(3476KiB/1008msec) 00:13:27.171 slat (nsec): min=6798, max=85647, avg=18771.23, stdev=8782.85 00:13:27.171 clat (usec): min=424, max=41039, avg=782.29, stdev=3349.76 00:13:27.171 lat (usec): min=437, max=41052, avg=801.06, stdev=3349.30 00:13:27.171 clat percentiles (usec): 00:13:27.171 | 1.00th=[ 441], 5.00th=[ 461], 10.00th=[ 469], 20.00th=[ 478], 00:13:27.171 | 30.00th=[ 486], 40.00th=[ 494], 50.00th=[ 498], 60.00th=[ 506], 00:13:27.171 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 562], 00:13:27.171 | 99.00th=[ 603], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:27.171 | 99.99th=[41157] 00:13:27.171 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:13:27.171 slat (nsec): min=6476, max=59482, avg=18577.93, stdev=10292.48 00:13:27.171 clat (usec): min=213, max=1314, avg=276.51, stdev=52.50 00:13:27.171 lat (usec): min=222, max=1324, avg=295.09, stdev=55.12 00:13:27.171 clat percentiles (usec): 00:13:27.171 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 237], 00:13:27.171 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 269], 60.00th=[ 281], 00:13:27.171 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 334], 95.00th=[ 351], 00:13:27.171 | 99.00th=[ 392], 99.50th=[ 424], 99.90th=[ 523], 99.95th=[ 1319], 00:13:27.171 | 99.99th=[ 1319] 00:13:27.171 bw ( KiB/s): min= 2400, max= 5792, per=34.40%, avg=4096.00, stdev=2398.51, samples=2 00:13:27.171 iops : min= 600, max= 1448, avg=1024.00, stdev=599.63, samples=2 00:13:27.171 lat (usec) : 250=19.12%, 500=58.16%, 750=22.35% 00:13:27.171 lat (msec) : 2=0.05%, 50=0.32% 00:13:27.171 cpu : usr=2.18%, sys=3.18%, ctx=1897, majf=0, minf=1 00:13:27.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.171 issued rwts: total=869,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.172 job3: (groupid=0, jobs=1): err= 0: pid=2592197: Wed Apr 24 21:27:52 2024 00:13:27.172 
read: IOPS=510, BW=2043KiB/s (2092kB/s)(2076KiB/1016msec) 00:13:27.172 slat (nsec): min=8406, max=69755, avg=23045.15, stdev=9455.26 00:13:27.172 clat (usec): min=515, max=42042, avg=1145.54, stdev=4767.92 00:13:27.172 lat (usec): min=543, max=42055, avg=1168.59, stdev=4766.67 00:13:27.172 clat percentiles (usec): 00:13:27.172 | 1.00th=[ 529], 5.00th=[ 553], 10.00th=[ 553], 20.00th=[ 562], 00:13:27.172 | 30.00th=[ 570], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 594], 00:13:27.172 | 70.00th=[ 603], 80.00th=[ 611], 90.00th=[ 627], 95.00th=[ 644], 00:13:27.172 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:27.172 | 99.99th=[42206] 00:13:27.172 write: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec); 0 zone resets 00:13:27.172 slat (nsec): min=6512, max=70093, avg=18703.31, stdev=10625.45 00:13:27.172 clat (usec): min=236, max=655, avg=372.56, stdev=80.81 00:13:27.172 lat (usec): min=248, max=699, avg=391.26, stdev=84.29 00:13:27.172 clat percentiles (usec): 00:13:27.172 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 293], 20.00th=[ 302], 00:13:27.172 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 351], 60.00th=[ 388], 00:13:27.172 | 70.00th=[ 400], 80.00th=[ 449], 90.00th=[ 494], 95.00th=[ 519], 00:13:27.172 | 99.00th=[ 603], 99.50th=[ 611], 99.90th=[ 644], 99.95th=[ 660], 00:13:27.172 | 99.99th=[ 660] 00:13:27.172 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=2 00:13:27.172 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:13:27.172 lat (usec) : 250=0.32%, 500=60.40%, 750=38.82% 00:13:27.172 lat (msec) : 50=0.45% 00:13:27.172 cpu : usr=2.56%, sys=2.86%, ctx=1543, majf=0, minf=1 00:13:27.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.172 issued rwts: total=519,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.172 00:13:27.172 Run status group 0 (all jobs): 00:13:27.172 READ: bw=5566KiB/s (5699kB/s), 85.3KiB/s-3448KiB/s (87.3kB/s-3531kB/s), io=5744KiB (5882kB), run=1008-1032msec 00:13:27.172 WRITE: bw=11.6MiB/s (12.2MB/s), 1984KiB/s-4063KiB/s (2032kB/s-4161kB/s), io=12.0MiB (12.6MB), run=1008-1032msec 00:13:27.172 00:13:27.172 Disk stats (read/write): 00:13:27.172 nvme0n1: ios=67/512, merge=0/0, ticks=716/141, in_queue=857, util=87.68% 00:13:27.172 nvme0n2: ios=49/512, merge=0/0, ticks=605/196, in_queue=801, util=87.30% 00:13:27.172 nvme0n3: ios=888/1024, merge=0/0, ticks=1456/269, in_queue=1725, util=96.23% 00:13:27.172 nvme0n4: ios=515/1024, merge=0/0, ticks=415/371, in_queue=786, util=89.66% 00:13:27.172 21:27:52 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:27.172 [global] 00:13:27.172 thread=1 00:13:27.172 invalidate=1 00:13:27.172 rw=write 00:13:27.172 time_based=1 00:13:27.172 runtime=1 00:13:27.172 ioengine=libaio 00:13:27.172 direct=1 00:13:27.172 bs=4096 00:13:27.172 iodepth=128 00:13:27.172 norandommap=0 00:13:27.172 numjobs=1 00:13:27.172 00:13:27.172 verify_dump=1 00:13:27.172 verify_backlog=512 00:13:27.172 verify_state_save=0 00:13:27.172 do_verify=1 00:13:27.172 verify=crc32c-intel 00:13:27.172 [job0] 00:13:27.172 filename=/dev/nvme0n1 00:13:27.172 [job1] 00:13:27.172 filename=/dev/nvme0n2 00:13:27.172 [job2] 00:13:27.172 
filename=/dev/nvme0n3 00:13:27.172 [job3] 00:13:27.172 filename=/dev/nvme0n4 00:13:27.172 Could not set queue depth (nvme0n1) 00:13:27.172 Could not set queue depth (nvme0n2) 00:13:27.172 Could not set queue depth (nvme0n3) 00:13:27.172 Could not set queue depth (nvme0n4) 00:13:27.172 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.172 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.172 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.172 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.172 fio-3.35 00:13:27.172 Starting 4 threads 00:13:28.547 00:13:28.547 job0: (groupid=0, jobs=1): err= 0: pid=2592426: Wed Apr 24 21:27:54 2024 00:13:28.547 read: IOPS=4544, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1014msec) 00:13:28.547 slat (usec): min=2, max=45217, avg=99.66, stdev=846.19 00:13:28.547 clat (usec): min=2271, max=66602, avg=12678.51, stdev=6455.34 00:13:28.547 lat (usec): min=3647, max=67150, avg=12778.17, stdev=6521.32 00:13:28.547 clat percentiles (usec): 00:13:28.547 | 1.00th=[ 5211], 5.00th=[ 6783], 10.00th=[ 7963], 20.00th=[ 9634], 00:13:28.547 | 30.00th=[10552], 40.00th=[11338], 50.00th=[12125], 60.00th=[12518], 00:13:28.547 | 70.00th=[13042], 80.00th=[13829], 90.00th=[15401], 95.00th=[18220], 00:13:28.547 | 99.00th=[43779], 99.50th=[59507], 99.90th=[65799], 99.95th=[66847], 00:13:28.547 | 99.99th=[66847] 00:13:28.547 write: IOPS=4966, BW=19.4MiB/s (20.3MB/s)(19.7MiB/1014msec); 0 zone resets 00:13:28.547 slat (usec): min=3, max=11311, avg=92.30, stdev=538.45 00:13:28.547 clat (usec): min=957, max=91033, avg=13955.38, stdev=10603.43 00:13:28.547 lat (usec): min=967, max=91044, avg=14047.68, stdev=10652.91 00:13:28.547 clat percentiles (usec): 00:13:28.547 | 1.00th=[ 4228], 5.00th=[ 5932], 10.00th=[ 7111], 20.00th=[ 9765], 00:13:28.547 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:13:28.547 | 70.00th=[12780], 80.00th=[15270], 90.00th=[20579], 95.00th=[27657], 00:13:28.547 | 99.00th=[78119], 99.50th=[81265], 99.90th=[90702], 99.95th=[90702], 00:13:28.547 | 99.99th=[90702] 00:13:28.547 bw ( KiB/s): min=19352, max=19920, per=37.31%, avg=19636.00, stdev=401.64, samples=2 00:13:28.547 iops : min= 4838, max= 4980, avg=4909.00, stdev=100.41, samples=2 00:13:28.547 lat (usec) : 1000=0.03% 00:13:28.547 lat (msec) : 2=0.12%, 4=0.26%, 10=22.53%, 20=69.42%, 50=6.31% 00:13:28.547 lat (msec) : 100=1.32% 00:13:28.547 cpu : usr=5.33%, sys=7.60%, ctx=500, majf=0, minf=1 00:13:28.547 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:28.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.547 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.547 issued rwts: total=4608,5036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.547 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.547 job1: (groupid=0, jobs=1): err= 0: pid=2592427: Wed Apr 24 21:27:54 2024 00:13:28.547 read: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(10.0MiB/1018msec) 00:13:28.547 slat (usec): min=2, max=128455, avg=206.72, stdev=2867.57 00:13:28.547 clat (msec): min=5, max=147, avg=28.82, stdev=28.16 00:13:28.547 lat (msec): min=5, max=147, avg=29.02, stdev=28.28 00:13:28.547 clat percentiles (msec): 00:13:28.547 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 13], 20.00th=[ 
14], 00:13:28.547 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 20], 60.00th=[ 24], 00:13:28.547 | 70.00th=[ 29], 80.00th=[ 35], 90.00th=[ 56], 95.00th=[ 68], 00:13:28.547 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 146], 00:13:28.547 | 99.99th=[ 148] 00:13:28.547 write: IOPS=2851, BW=11.1MiB/s (11.7MB/s)(11.3MiB/1018msec); 0 zone resets 00:13:28.547 slat (usec): min=3, max=38288, avg=153.69, stdev=1301.32 00:13:28.547 clat (usec): min=5227, max=55766, avg=18399.62, stdev=10397.51 00:13:28.548 lat (usec): min=5244, max=55770, avg=18553.31, stdev=10460.21 00:13:28.548 clat percentiles (usec): 00:13:28.548 | 1.00th=[ 5473], 5.00th=[ 6980], 10.00th=[ 7832], 20.00th=[ 9503], 00:13:28.548 | 30.00th=[12649], 40.00th=[14091], 50.00th=[16057], 60.00th=[17695], 00:13:28.548 | 70.00th=[19006], 80.00th=[25297], 90.00th=[36439], 95.00th=[40109], 00:13:28.548 | 99.00th=[47973], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:13:28.548 | 99.99th=[55837] 00:13:28.548 bw ( KiB/s): min= 8248, max=13960, per=21.10%, avg=11104.00, stdev=4038.99, samples=2 00:13:28.548 iops : min= 2062, max= 3490, avg=2776.00, stdev=1009.75, samples=2 00:13:28.548 lat (msec) : 10=15.63%, 20=46.84%, 50=32.45%, 100=2.75%, 250=2.32% 00:13:28.548 cpu : usr=3.05%, sys=4.52%, ctx=176, majf=0, minf=1 00:13:28.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:28.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.548 issued rwts: total=2560,2903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.548 job2: (groupid=0, jobs=1): err= 0: pid=2592428: Wed Apr 24 21:27:54 2024 00:13:28.548 read: IOPS=2011, BW=8047KiB/s (8240kB/s)(8192KiB/1018msec) 00:13:28.548 slat (usec): min=2, max=137442, avg=225.95, stdev=3602.14 00:13:28.548 clat (msec): min=3, max=207, avg=32.42, stdev=36.21 00:13:28.548 lat (msec): min=3, max=207, avg=32.65, stdev=36.54 00:13:28.548 clat percentiles (msec): 00:13:28.548 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 14], 00:13:28.548 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 20], 60.00th=[ 23], 00:13:28.548 | 70.00th=[ 29], 80.00th=[ 33], 90.00th=[ 77], 95.00th=[ 142], 00:13:28.548 | 99.00th=[ 148], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 176], 00:13:28.548 | 99.99th=[ 207] 00:13:28.548 write: IOPS=2339, BW=9360KiB/s (9584kB/s)(9528KiB/1018msec); 0 zone resets 00:13:28.548 slat (usec): min=3, max=37032, avg=139.82, stdev=1351.12 00:13:28.548 clat (usec): min=548, max=143758, avg=26412.37, stdev=28699.04 00:13:28.548 lat (usec): min=590, max=143771, avg=26552.19, stdev=28803.49 00:13:28.548 clat percentiles (usec): 00:13:28.548 | 1.00th=[ 1254], 5.00th=[ 1926], 10.00th=[ 2704], 20.00th=[ 5932], 00:13:28.548 | 30.00th=[ 8455], 40.00th=[ 13042], 50.00th=[ 16909], 60.00th=[ 21365], 00:13:28.548 | 70.00th=[ 27919], 80.00th=[ 38011], 90.00th=[ 76022], 95.00th=[ 92799], 00:13:28.548 | 99.00th=[127402], 99.50th=[129500], 99.90th=[135267], 99.95th=[135267], 00:13:28.548 | 99.99th=[143655] 00:13:28.548 bw ( KiB/s): min= 7512, max=10520, per=17.13%, avg=9016.00, stdev=2126.98, samples=2 00:13:28.548 iops : min= 1878, max= 2630, avg=2254.00, stdev=531.74, samples=2 00:13:28.548 lat (usec) : 750=0.11%, 1000=0.16% 00:13:28.548 lat (msec) : 2=2.71%, 4=4.72%, 10=16.12%, 20=34.70%, 50=26.98% 00:13:28.548 lat (msec) : 100=8.40%, 250=6.12% 00:13:28.548 cpu : usr=1.38%, sys=3.34%, ctx=199, majf=0, 
minf=1 00:13:28.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:28.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.548 issued rwts: total=2048,2382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.548 job3: (groupid=0, jobs=1): err= 0: pid=2592429: Wed Apr 24 21:27:54 2024 00:13:28.548 read: IOPS=2946, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1002msec) 00:13:28.548 slat (usec): min=2, max=136135, avg=183.38, stdev=2750.24 00:13:28.548 clat (usec): min=1614, max=171728, avg=24462.93, stdev=27730.30 00:13:28.548 lat (usec): min=1742, max=203730, avg=24646.31, stdev=27894.88 00:13:28.548 clat percentiles (msec): 00:13:28.548 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 11], 20.00th=[ 13], 00:13:28.548 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 19], 00:13:28.548 | 70.00th=[ 22], 80.00th=[ 28], 90.00th=[ 39], 95.00th=[ 71], 00:13:28.548 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 150], 00:13:28.548 | 99.99th=[ 171] 00:13:28.548 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:13:28.548 slat (usec): min=4, max=34367, avg=127.86, stdev=1032.20 00:13:28.548 clat (usec): min=1020, max=73789, avg=17873.66, stdev=10300.89 00:13:28.548 lat (usec): min=1065, max=73801, avg=18001.52, stdev=10358.76 00:13:28.548 clat percentiles (usec): 00:13:28.548 | 1.00th=[ 3654], 5.00th=[ 4178], 10.00th=[ 8225], 20.00th=[11994], 00:13:28.548 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14484], 60.00th=[15139], 00:13:28.548 | 70.00th=[16188], 80.00th=[25035], 90.00th=[35390], 95.00th=[41157], 00:13:28.548 | 99.00th=[46924], 99.50th=[47449], 99.90th=[73925], 99.95th=[73925], 00:13:28.548 | 99.99th=[73925] 00:13:28.548 bw ( KiB/s): min=10624, max=13952, per=23.35%, avg=12288.00, stdev=2353.25, samples=2 00:13:28.548 iops : min= 2656, max= 3488, avg=3072.00, stdev=588.31, samples=2 00:13:28.548 lat (msec) : 2=0.30%, 4=1.29%, 10=9.38%, 20=57.65%, 50=27.57% 00:13:28.548 lat (msec) : 100=1.69%, 250=2.11% 00:13:28.548 cpu : usr=4.00%, sys=5.00%, ctx=291, majf=0, minf=1 00:13:28.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:28.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.548 issued rwts: total=2952,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.548 00:13:28.548 Run status group 0 (all jobs): 00:13:28.548 READ: bw=46.7MiB/s (49.0MB/s), 8047KiB/s-17.8MiB/s (8240kB/s-18.6MB/s), io=47.5MiB (49.8MB), run=1002-1018msec 00:13:28.548 WRITE: bw=51.4MiB/s (53.9MB/s), 9360KiB/s-19.4MiB/s (9584kB/s-20.3MB/s), io=52.3MiB (54.9MB), run=1002-1018msec 00:13:28.548 00:13:28.548 Disk stats (read/write): 00:13:28.548 nvme0n1: ios=4112/4265, merge=0/0, ticks=30722/30343, in_queue=61065, util=98.70% 00:13:28.548 nvme0n2: ios=2070/2528, merge=0/0, ticks=56634/32357, in_queue=88991, util=97.76% 00:13:28.548 nvme0n3: ios=1559/1766, merge=0/0, ticks=55899/50588, in_queue=106487, util=96.88% 00:13:28.548 nvme0n4: ios=2253/2560, merge=0/0, ticks=44276/25682, in_queue=69958, util=96.85% 00:13:28.548 21:27:54 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 
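[editor's note] The only change from the earlier 4 KiB runs is the wrapper's -d 128, which lifts iodepth in the generated job file from 1 to 128 so libaio can keep up to 128 requests in flight per job. The throughput lines in the run that follows are internally consistent and easy to spot-check: job0 completes 2406 write IOs of bs=4096 in 1004 msec. A sanity check using only numbers fio prints (integer shell arithmetic, so the rate floors to 9585 against fio's rounded 9586KiB/s):

    iops_total=2406 bs=4096 msec=1004
    echo "$(( iops_total * bs / 1024 )) KiB total"            # 9624 KiB, matching io=9624KiB
    echo "$(( iops_total * bs * 1000 / msec / 1024 )) KiB/s"  # 9585, vs. fio's BW=9586KiB/s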
00:13:28.548 [global] 00:13:28.548 thread=1 00:13:28.548 invalidate=1 00:13:28.548 rw=randwrite 00:13:28.548 time_based=1 00:13:28.548 runtime=1 00:13:28.548 ioengine=libaio 00:13:28.548 direct=1 00:13:28.548 bs=4096 00:13:28.548 iodepth=128 00:13:28.548 norandommap=0 00:13:28.548 numjobs=1 00:13:28.548 00:13:28.548 verify_dump=1 00:13:28.548 verify_backlog=512 00:13:28.548 verify_state_save=0 00:13:28.548 do_verify=1 00:13:28.548 verify=crc32c-intel 00:13:28.548 [job0] 00:13:28.548 filename=/dev/nvme0n1 00:13:28.548 [job1] 00:13:28.548 filename=/dev/nvme0n2 00:13:28.548 [job2] 00:13:28.548 filename=/dev/nvme0n3 00:13:28.548 [job3] 00:13:28.548 filename=/dev/nvme0n4 00:13:28.548 Could not set queue depth (nvme0n1) 00:13:28.548 Could not set queue depth (nvme0n2) 00:13:28.548 Could not set queue depth (nvme0n3) 00:13:28.548 Could not set queue depth (nvme0n4) 00:13:28.807 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.807 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.807 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.807 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.807 fio-3.35 00:13:28.807 Starting 4 threads 00:13:30.184 00:13:30.184 job0: (groupid=0, jobs=1): err= 0: pid=2592779: Wed Apr 24 21:27:55 2024 00:13:30.184 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:13:30.184 slat (usec): min=3, max=26069, avg=240.13, stdev=1405.95 00:13:30.184 clat (usec): min=9275, max=71127, avg=28220.04, stdev=16163.97 00:13:30.184 lat (usec): min=9432, max=71169, avg=28460.16, stdev=16241.62 00:13:30.184 clat percentiles (usec): 00:13:30.184 | 1.00th=[ 9765], 5.00th=[11076], 10.00th=[11863], 20.00th=[12911], 00:13:30.184 | 30.00th=[13566], 40.00th=[15533], 50.00th=[22414], 60.00th=[31851], 00:13:30.184 | 70.00th=[37487], 80.00th=[45876], 90.00th=[51643], 95.00th=[54789], 00:13:30.184 | 99.00th=[67634], 99.50th=[68682], 99.90th=[68682], 99.95th=[70779], 00:13:30.184 | 99.99th=[70779] 00:13:30.184 write: IOPS=2396, BW=9586KiB/s (9816kB/s)(9624KiB/1004msec); 0 zone resets 00:13:30.184 slat (usec): min=3, max=77395, avg=202.66, stdev=2083.67 00:13:30.184 clat (msec): min=2, max=154, avg=23.80, stdev=19.66 00:13:30.184 lat (msec): min=4, max=154, avg=24.00, stdev=19.85 00:13:30.184 clat percentiles (msec): 00:13:30.184 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:13:30.184 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 19], 00:13:30.184 | 70.00th=[ 30], 80.00th=[ 37], 90.00th=[ 42], 95.00th=[ 46], 00:13:30.184 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:13:30.184 | 99.99th=[ 155] 00:13:30.184 bw ( KiB/s): min= 8200, max=10040, per=15.18%, avg=9120.00, stdev=1301.08, samples=2 00:13:30.184 iops : min= 2050, max= 2510, avg=2280.00, stdev=325.27, samples=2 00:13:30.184 lat (msec) : 4=0.02%, 10=3.14%, 20=51.50%, 50=37.70%, 100=6.78% 00:13:30.184 lat (msec) : 250=0.85% 00:13:30.184 cpu : usr=2.39%, sys=4.39%, ctx=188, majf=0, minf=1 00:13:30.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:30.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.184 issued rwts: total=2048,2406,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:13:30.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.184 job1: (groupid=0, jobs=1): err= 0: pid=2592780: Wed Apr 24 21:27:55 2024 00:13:30.184 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:13:30.184 slat (usec): min=2, max=15394, avg=85.00, stdev=674.07 00:13:30.184 clat (usec): min=580, max=30940, avg=13509.12, stdev=4279.56 00:13:30.184 lat (usec): min=613, max=30948, avg=13594.12, stdev=4314.53 00:13:30.184 clat percentiles (usec): 00:13:30.184 | 1.00th=[ 3589], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10683], 00:13:30.184 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12518], 60.00th=[13566], 00:13:30.184 | 70.00th=[14877], 80.00th=[15664], 90.00th=[18744], 95.00th=[22152], 00:13:30.184 | 99.00th=[28705], 99.50th=[30016], 99.90th=[31065], 99.95th=[31065], 00:13:30.184 | 99.99th=[31065] 00:13:30.184 write: IOPS=5011, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1008msec); 0 zone resets 00:13:30.184 slat (usec): min=3, max=17878, avg=88.90, stdev=684.76 00:13:30.184 clat (usec): min=1039, max=44412, avg=12891.50, stdev=7426.94 00:13:30.184 lat (usec): min=1172, max=44457, avg=12980.40, stdev=7466.74 00:13:30.184 clat percentiles (usec): 00:13:30.184 | 1.00th=[ 2573], 5.00th=[ 4883], 10.00th=[ 6718], 20.00th=[ 7767], 00:13:30.184 | 30.00th=[ 8848], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[11600], 00:13:30.184 | 70.00th=[13829], 80.00th=[16057], 90.00th=[24249], 95.00th=[30802], 00:13:30.184 | 99.00th=[38011], 99.50th=[38011], 99.90th=[39584], 99.95th=[39584], 00:13:30.184 | 99.99th=[44303] 00:13:30.184 bw ( KiB/s): min=17600, max=21792, per=32.79%, avg=19696.00, stdev=2964.19, samples=2 00:13:30.184 iops : min= 4400, max= 5448, avg=4924.00, stdev=741.05, samples=2 00:13:30.184 lat (usec) : 750=0.01%, 1000=0.01% 00:13:30.184 lat (msec) : 2=0.68%, 4=1.05%, 10=26.25%, 20=61.08%, 50=10.92% 00:13:30.184 cpu : usr=3.87%, sys=6.16%, ctx=355, majf=0, minf=1 00:13:30.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:30.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.184 issued rwts: total=4608,5052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.184 job2: (groupid=0, jobs=1): err= 0: pid=2592781: Wed Apr 24 21:27:55 2024 00:13:30.184 read: IOPS=3816, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1005msec) 00:13:30.184 slat (usec): min=3, max=13828, avg=122.50, stdev=698.48 00:13:30.184 clat (usec): min=722, max=59991, avg=15358.76, stdev=7159.51 00:13:30.184 lat (usec): min=5850, max=60028, avg=15481.25, stdev=7201.43 00:13:30.184 clat percentiles (usec): 00:13:30.184 | 1.00th=[ 6128], 5.00th=[11076], 10.00th=[11731], 20.00th=[12649], 00:13:30.184 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[14484], 00:13:30.184 | 70.00th=[14877], 80.00th=[15664], 90.00th=[16450], 95.00th=[21103], 00:13:30.184 | 99.00th=[57410], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:13:30.184 | 99.99th=[60031] 00:13:30.184 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:13:30.184 slat (usec): min=3, max=30779, avg=120.40, stdev=776.02 00:13:30.184 clat (usec): min=6255, max=54008, avg=16637.11, stdev=7762.39 00:13:30.184 lat (usec): min=6266, max=54023, avg=16757.51, stdev=7816.09 00:13:30.184 clat percentiles (usec): 00:13:30.184 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[10945], 20.00th=[12387], 00:13:30.184 | 30.00th=[12780], 
40.00th=[13042], 50.00th=[13698], 60.00th=[14746], 00:13:30.184 | 70.00th=[15795], 80.00th=[19792], 90.00th=[27919], 95.00th=[34866], 00:13:30.184 | 99.00th=[48497], 99.50th=[48497], 99.90th=[53740], 99.95th=[53740], 00:13:30.184 | 99.99th=[54264] 00:13:30.184 bw ( KiB/s): min=16384, max=16384, per=27.27%, avg=16384.00, stdev= 0.00, samples=2 00:13:30.184 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:13:30.184 lat (usec) : 750=0.01% 00:13:30.184 lat (msec) : 10=2.03%, 20=85.24%, 50=11.51%, 100=1.21% 00:13:30.184 cpu : usr=5.58%, sys=7.67%, ctx=446, majf=0, minf=1 00:13:30.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:30.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.184 issued rwts: total=3836,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.184 job3: (groupid=0, jobs=1): err= 0: pid=2592782: Wed Apr 24 21:27:55 2024 00:13:30.184 read: IOPS=3097, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1007msec) 00:13:30.184 slat (usec): min=3, max=16068, avg=132.32, stdev=815.82 00:13:30.184 clat (usec): min=5750, max=37475, avg=16627.21, stdev=4464.30 00:13:30.184 lat (usec): min=6800, max=44015, avg=16759.53, stdev=4518.27 00:13:30.184 clat percentiles (usec): 00:13:30.184 | 1.00th=[ 8717], 5.00th=[10683], 10.00th=[12125], 20.00th=[13304], 00:13:30.184 | 30.00th=[14091], 40.00th=[15008], 50.00th=[16057], 60.00th=[16909], 00:13:30.185 | 70.00th=[17695], 80.00th=[19792], 90.00th=[21627], 95.00th=[24511], 00:13:30.185 | 99.00th=[32637], 99.50th=[32637], 99.90th=[37487], 99.95th=[37487], 00:13:30.185 | 99.99th=[37487] 00:13:30.185 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:13:30.185 slat (usec): min=4, max=123995, avg=155.24, stdev=2226.60 00:13:30.185 clat (msec): min=6, max=137, avg=18.98, stdev=16.48 00:13:30.185 lat (msec): min=6, max=145, avg=19.14, stdev=16.60 00:13:30.185 clat percentiles (msec): 00:13:30.185 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:13:30.185 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 17], 00:13:30.185 | 70.00th=[ 19], 80.00th=[ 22], 90.00th=[ 24], 95.00th=[ 27], 00:13:30.185 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 138], 00:13:30.185 | 99.99th=[ 138] 00:13:30.185 bw ( KiB/s): min=12288, max=15736, per=23.33%, avg=14012.00, stdev=2438.10, samples=2 00:13:30.185 iops : min= 3072, max= 3934, avg=3503.00, stdev=609.53, samples=2 00:13:30.185 lat (msec) : 10=2.48%, 20=76.29%, 50=20.27%, 250=0.95% 00:13:30.185 cpu : usr=3.88%, sys=6.46%, ctx=245, majf=0, minf=1 00:13:30.185 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:30.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.185 issued rwts: total=3119,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.185 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.185 00:13:30.185 Run status group 0 (all jobs): 00:13:30.185 READ: bw=52.7MiB/s (55.3MB/s), 8159KiB/s-17.9MiB/s (8355kB/s-18.7MB/s), io=53.2MiB (55.8MB), run=1004-1008msec 00:13:30.185 WRITE: bw=58.7MiB/s (61.5MB/s), 9586KiB/s-19.6MiB/s (9816kB/s-20.5MB/s), io=59.1MiB (62.0MB), run=1004-1008msec 00:13:30.185 00:13:30.185 Disk stats (read/write): 00:13:30.185 nvme0n1: ios=1482/1536, merge=0/0, 
ticks=14492/13990, in_queue=28482, util=88.98%
00:13:30.185 nvme0n2: ios=3606/4093, merge=0/0, ticks=43003/42179, in_queue=85182, util=98.15%
00:13:30.185 nvme0n3: ios=3094/3234, merge=0/0, ticks=16632/20523, in_queue=37155, util=96.43%
00:13:30.185 nvme0n4: ios=2503/2560, merge=0/0, ticks=22262/27576, in_queue=49838, util=96.16%
00:13:30.185 21:27:55 -- target/fio.sh@55 -- # sync
00:13:30.185 21:27:55 -- target/fio.sh@59 -- # fio_pid=2592918
00:13:30.185 21:27:55 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:13:30.185 21:27:55 -- target/fio.sh@61 -- # sleep 3
00:13:30.185 [global]
00:13:30.185 thread=1
00:13:30.185 invalidate=1
00:13:30.185 rw=read
00:13:30.185 time_based=1
00:13:30.185 runtime=10
00:13:30.185 ioengine=libaio
00:13:30.185 direct=1
00:13:30.185 bs=4096
00:13:30.185 iodepth=1
00:13:30.185 norandommap=1
00:13:30.185 numjobs=1
00:13:30.185
00:13:30.185 [job0]
00:13:30.185 filename=/dev/nvme0n1
00:13:30.185 [job1]
00:13:30.185 filename=/dev/nvme0n2
00:13:30.185 [job2]
00:13:30.185 filename=/dev/nvme0n3
00:13:30.185 [job3]
00:13:30.185 filename=/dev/nvme0n4
00:13:30.185 Could not set queue depth (nvme0n1)
00:13:30.185 Could not set queue depth (nvme0n2)
00:13:30.185 Could not set queue depth (nvme0n3)
00:13:30.185 Could not set queue depth (nvme0n4)
00:13:30.185 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:30.185 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:30.185 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:30.185 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:30.185 fio-3.35
00:13:30.185 Starting 4 threads
00:13:33.468 21:27:58 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:13:33.468 21:27:58 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:13:33.468 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=29315072, buflen=4096
00:13:33.468 fio: pid=2593009, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:13:33.468 21:27:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:13:33.468 21:27:59 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:13:33.468 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=319488, buflen=4096
00:13:33.468 fio: pid=2593008, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:13:33.726 21:27:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:13:33.726 21:27:59 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:13:33.726 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4448256, buflen=4096
00:13:33.726 fio: pid=2593006, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:13:33.984 21:27:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:13:33.984 21:27:59 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:13:33.984 fio:
io_u error on file /dev/nvme0n2: Remote I/O error: read offset=360448, buflen=4096 00:13:33.984 fio: pid=2593007, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:34.242 00:13:34.242 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2593006: Wed Apr 24 21:27:59 2024 00:13:34.242 read: IOPS=319, BW=1279KiB/s (1309kB/s)(4344KiB/3397msec) 00:13:34.242 slat (nsec): min=4378, max=52940, avg=10929.21, stdev=7308.81 00:13:34.242 clat (usec): min=310, max=42062, avg=3093.20, stdev=10232.25 00:13:34.242 lat (usec): min=320, max=42078, avg=3104.09, stdev=10234.01 00:13:34.242 clat percentiles (usec): 00:13:34.242 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 330], 00:13:34.242 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 375], 00:13:34.242 | 70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 449], 95.00th=[41157], 00:13:34.242 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:34.242 | 99.99th=[42206] 00:13:34.242 bw ( KiB/s): min= 96, max= 4824, per=15.66%, avg=1436.00, stdev=1843.87, samples=6 00:13:34.242 iops : min= 24, max= 1206, avg=359.00, stdev=460.97, samples=6 00:13:34.242 lat (usec) : 500=92.00%, 750=1.29% 00:13:34.242 lat (msec) : 50=6.62% 00:13:34.242 cpu : usr=0.06%, sys=0.50%, ctx=1089, majf=0, minf=1 00:13:34.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.242 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.242 issued rwts: total=1087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.242 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2593007: Wed Apr 24 21:27:59 2024 00:13:34.242 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(352KiB/3669msec) 00:13:34.242 slat (usec): min=12, max=12825, avg=219.31, stdev=1444.28 00:13:34.242 clat (usec): min=737, max=67150, avg=41214.88, stdev=5343.70 00:13:34.242 lat (usec): min=767, max=67163, avg=41436.28, stdev=5571.40 00:13:34.242 clat percentiles (usec): 00:13:34.242 | 1.00th=[ 742], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:34.242 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:34.242 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[43254], 00:13:34.242 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:13:34.242 | 99.99th=[67634] 00:13:34.242 bw ( KiB/s): min= 93, max= 104, per=1.05%, avg=96.71, stdev= 3.40, samples=7 00:13:34.243 iops : min= 23, max= 26, avg=24.14, stdev= 0.90, samples=7 00:13:34.243 lat (usec) : 750=1.12% 00:13:34.243 lat (msec) : 50=95.51%, 100=2.25% 00:13:34.243 cpu : usr=0.11%, sys=0.00%, ctx=91, majf=0, minf=1 00:13:34.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.243 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.243 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.243 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2593008: Wed Apr 24 21:27:59 2024 00:13:34.243 read: IOPS=25, BW=99.2KiB/s (102kB/s)(312KiB/3144msec) 00:13:34.243 slat (usec): min=11, max=2835, avg=55.38, 
stdev=316.91 00:13:34.243 clat (usec): min=642, max=41623, avg=39960.87, stdev=6402.51 00:13:34.243 lat (usec): min=677, max=43984, avg=40016.51, stdev=6415.11 00:13:34.243 clat percentiles (usec): 00:13:34.243 | 1.00th=[ 644], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:34.243 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:34.243 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:34.243 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:34.243 | 99.99th=[41681] 00:13:34.243 bw ( KiB/s): min= 96, max= 104, per=1.09%, avg=100.00, stdev= 4.38, samples=6 00:13:34.243 iops : min= 24, max= 26, avg=25.00, stdev= 1.10, samples=6 00:13:34.243 lat (usec) : 750=1.27%, 1000=1.27% 00:13:34.243 lat (msec) : 50=96.20% 00:13:34.243 cpu : usr=0.10%, sys=0.00%, ctx=80, majf=0, minf=1 00:13:34.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.243 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.243 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.243 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2593009: Wed Apr 24 21:27:59 2024 00:13:34.243 read: IOPS=2480, BW=9920KiB/s (10.2MB/s)(28.0MiB/2886msec) 00:13:34.243 slat (nsec): min=4428, max=63642, avg=11232.41, stdev=7651.61 00:13:34.243 clat (usec): min=315, max=884, avg=385.28, stdev=42.63 00:13:34.243 lat (usec): min=325, max=910, avg=396.51, stdev=45.92 00:13:34.243 clat percentiles (usec): 00:13:34.243 | 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 351], 00:13:34.243 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 379], 60.00th=[ 392], 00:13:34.243 | 70.00th=[ 400], 80.00th=[ 408], 90.00th=[ 429], 95.00th=[ 465], 00:13:34.243 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 652], 99.95th=[ 758], 00:13:34.243 | 99.99th=[ 889] 00:13:34.243 bw ( KiB/s): min= 9696, max=10808, per=100.00%, avg=10084.80, stdev=444.51, samples=5 00:13:34.243 iops : min= 2424, max= 2702, avg=2521.20, stdev=111.13, samples=5 00:13:34.243 lat (usec) : 500=97.88%, 750=2.05%, 1000=0.06% 00:13:34.243 cpu : usr=1.53%, sys=4.19%, ctx=7158, majf=0, minf=1 00:13:34.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.243 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.243 issued rwts: total=7158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.243 00:13:34.243 Run status group 0 (all jobs): 00:13:34.243 READ: bw=9168KiB/s (9388kB/s), 95.9KiB/s-9920KiB/s (98.2kB/s-10.2MB/s), io=32.8MiB (34.4MB), run=2886-3669msec 00:13:34.243 00:13:34.243 Disk stats (read/write): 00:13:34.243 nvme0n1: ios=1085/0, merge=0/0, ticks=3321/0, in_queue=3321, util=95.94% 00:13:34.243 nvme0n2: ios=100/0, merge=0/0, ticks=3739/0, in_queue=3739, util=98.04% 00:13:34.243 nvme0n3: ios=105/0, merge=0/0, ticks=3162/0, in_queue=3162, util=97.16% 00:13:34.243 nvme0n4: ios=7143/0, merge=0/0, ticks=2633/0, in_queue=2633, util=96.75% 00:13:34.243 21:27:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.243 21:27:59 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:34.506 21:28:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.507 21:28:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:34.786 21:28:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.786 21:28:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:35.044 21:28:00 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:35.044 21:28:00 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:35.302 21:28:00 -- target/fio.sh@69 -- # fio_status=0 00:13:35.302 21:28:00 -- target/fio.sh@70 -- # wait 2592918 00:13:35.302 21:28:00 -- target/fio.sh@70 -- # fio_status=4 00:13:35.302 21:28:00 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.560 21:28:01 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.560 21:28:01 -- common/autotest_common.sh@1205 -- # local i=0 00:13:35.560 21:28:01 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:35.560 21:28:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.560 21:28:01 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:35.560 21:28:01 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.560 21:28:01 -- common/autotest_common.sh@1217 -- # return 0 00:13:35.560 21:28:01 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:35.560 21:28:01 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:35.560 nvmf hotplug test: fio failed as expected 00:13:35.560 21:28:01 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.820 21:28:01 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:35.820 21:28:01 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:35.820 21:28:01 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:35.820 21:28:01 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:35.820 21:28:01 -- target/fio.sh@91 -- # nvmftestfini 00:13:35.820 21:28:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:35.820 21:28:01 -- nvmf/common.sh@117 -- # sync 00:13:35.820 21:28:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:35.820 21:28:01 -- nvmf/common.sh@120 -- # set +e 00:13:35.820 21:28:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:35.820 21:28:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:35.820 rmmod nvme_tcp 00:13:35.820 rmmod nvme_fabrics 00:13:35.820 rmmod nvme_keyring 00:13:35.820 21:28:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.820 21:28:01 -- nvmf/common.sh@124 -- # set -e 00:13:35.820 21:28:01 -- nvmf/common.sh@125 -- # return 0 00:13:35.820 21:28:01 -- nvmf/common.sh@478 -- # '[' -n 2590890 ']' 00:13:35.820 21:28:01 -- nvmf/common.sh@479 -- # killprocess 2590890 00:13:35.820 21:28:01 -- common/autotest_common.sh@936 -- # '[' -z 2590890 ']' 00:13:35.820 21:28:01 -- common/autotest_common.sh@940 -- # kill -0 2590890 00:13:35.820 21:28:01 -- 
common/autotest_common.sh@941 -- # uname 00:13:35.820 21:28:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:35.820 21:28:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2590890 00:13:35.820 21:28:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:35.820 21:28:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:35.820 21:28:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2590890' 00:13:35.820 killing process with pid 2590890 00:13:35.820 21:28:01 -- common/autotest_common.sh@955 -- # kill 2590890 00:13:35.820 21:28:01 -- common/autotest_common.sh@960 -- # wait 2590890 00:13:36.079 21:28:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:36.079 21:28:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:36.079 21:28:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:36.079 21:28:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.079 21:28:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.079 21:28:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.079 21:28:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.079 21:28:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.616 21:28:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:38.616 00:13:38.616 real 0m23.300s 00:13:38.616 user 1m21.047s 00:13:38.616 sys 0m6.239s 00:13:38.616 21:28:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:38.616 21:28:03 -- common/autotest_common.sh@10 -- # set +x 00:13:38.616 ************************************ 00:13:38.616 END TEST nvmf_fio_target 00:13:38.616 ************************************ 00:13:38.616 21:28:03 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:38.616 21:28:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:38.616 21:28:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:38.616 21:28:03 -- common/autotest_common.sh@10 -- # set +x 00:13:38.616 ************************************ 00:13:38.616 START TEST nvmf_bdevio 00:13:38.616 ************************************ 00:13:38.616 21:28:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:38.616 * Looking for test storage... 
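A minimal sketch of replaying this bdevio stage by hand outside Jenkins, assuming root privileges and an already-built SPDK tree (neither is shown in this trace; the workspace path and the --transport flag are copied from the run_test line above):

  # invoke the same script run_test dispatches above
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo test/nvmf/target/bdevio.sh --transport=tcp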
00:13:38.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.616 21:28:03 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.616 21:28:03 -- nvmf/common.sh@7 -- # uname -s 00:13:38.616 21:28:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.616 21:28:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.616 21:28:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.616 21:28:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.616 21:28:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.616 21:28:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.616 21:28:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.616 21:28:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.616 21:28:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.616 21:28:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.616 21:28:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:38.616 21:28:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:38.616 21:28:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.616 21:28:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.616 21:28:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.616 21:28:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.616 21:28:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.616 21:28:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.616 21:28:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.616 21:28:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.616 21:28:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.616 21:28:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.616 21:28:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.616 21:28:03 -- paths/export.sh@5 -- # export PATH 00:13:38.616 21:28:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.616 21:28:03 -- nvmf/common.sh@47 -- # : 0 00:13:38.616 21:28:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.616 21:28:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.616 21:28:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.616 21:28:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.616 21:28:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.616 21:28:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:38.616 21:28:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.616 21:28:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.616 21:28:03 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:38.616 21:28:03 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:38.616 21:28:03 -- target/bdevio.sh@14 -- # nvmftestinit 00:13:38.616 21:28:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:38.617 21:28:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.617 21:28:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:38.617 21:28:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:38.617 21:28:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:38.617 21:28:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.617 21:28:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.617 21:28:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.617 21:28:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:38.617 21:28:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:38.617 21:28:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:38.617 21:28:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.559 21:28:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:40.559 21:28:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.559 21:28:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.559 21:28:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.559 21:28:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.559 21:28:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.560 21:28:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.560 21:28:05 -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.560 21:28:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.560 21:28:05 -- nvmf/common.sh@296 
-- # e810=() 00:13:40.560 21:28:05 -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.560 21:28:05 -- nvmf/common.sh@297 -- # x722=() 00:13:40.560 21:28:05 -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.560 21:28:05 -- nvmf/common.sh@298 -- # mlx=() 00:13:40.560 21:28:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.560 21:28:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.560 21:28:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.560 21:28:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.560 21:28:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.560 21:28:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.560 21:28:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.560 21:28:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.560 21:28:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.560 21:28:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.560 21:28:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.560 21:28:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.560 21:28:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.560 21:28:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:40.560 21:28:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.560 21:28:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.560 21:28:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:40.560 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:40.560 21:28:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.560 21:28:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:40.560 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:40.560 21:28:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.560 21:28:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.560 21:28:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.560 21:28:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:40.560 21:28:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.560 21:28:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:40.560 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:13:40.560 21:28:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.560 21:28:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.560 21:28:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.560 21:28:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:40.560 21:28:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.560 21:28:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:40.560 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:40.560 21:28:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.560 21:28:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:40.560 21:28:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:40.560 21:28:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:40.560 21:28:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:40.560 21:28:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.560 21:28:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.560 21:28:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.560 21:28:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:40.560 21:28:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.560 21:28:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.560 21:28:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:40.560 21:28:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.560 21:28:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.560 21:28:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:40.560 21:28:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:40.560 21:28:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.560 21:28:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.560 21:28:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.560 21:28:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.560 21:28:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:40.560 21:28:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.560 21:28:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.560 21:28:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.560 21:28:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:40.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:13:40.560 00:13:40.560 --- 10.0.0.2 ping statistics --- 00:13:40.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.560 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:13:40.560 21:28:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:40.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:13:40.560 00:13:40.560 --- 10.0.0.1 ping statistics --- 00:13:40.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.560 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:13:40.560 21:28:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.560 21:28:05 -- nvmf/common.sh@411 -- # return 0 00:13:40.560 21:28:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:40.561 21:28:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.561 21:28:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:40.561 21:28:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:40.561 21:28:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.561 21:28:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:40.561 21:28:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:40.561 21:28:05 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:40.561 21:28:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:40.561 21:28:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:40.561 21:28:05 -- common/autotest_common.sh@10 -- # set +x 00:13:40.561 21:28:05 -- nvmf/common.sh@470 -- # nvmfpid=2595640 00:13:40.561 21:28:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:40.561 21:28:05 -- nvmf/common.sh@471 -- # waitforlisten 2595640 00:13:40.561 21:28:05 -- common/autotest_common.sh@817 -- # '[' -z 2595640 ']' 00:13:40.561 21:28:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.561 21:28:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:40.561 21:28:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.561 21:28:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:40.561 21:28:05 -- common/autotest_common.sh@10 -- # set +x 00:13:40.561 [2024-04-24 21:28:05.935049] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:13:40.561 [2024-04-24 21:28:05.935127] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.561 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.561 [2024-04-24 21:28:06.003785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.561 [2024-04-24 21:28:06.125486] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.561 [2024-04-24 21:28:06.125549] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.561 [2024-04-24 21:28:06.125566] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.561 [2024-04-24 21:28:06.125580] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.561 [2024-04-24 21:28:06.125592] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
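The launch being waited on here, condensed into a standalone sketch; the ip netns / nvmf_tgt command is copied from the nvmf/common.sh trace above, while the backgrounding and the polling loop are assumptions standing in for waitforlisten:

  sudo ip netns exec cvl_0_0_ns_spdk \
      build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  nvmfpid=$!
  # poll the default RPC socket until the target answers, as waitforlisten does
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done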
00:13:40.561 [2024-04-24 21:28:06.125705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:40.561 [2024-04-24 21:28:06.125759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:40.561 [2024-04-24 21:28:06.125815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:40.561 [2024-04-24 21:28:06.125818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.497 21:28:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:41.497 21:28:06 -- common/autotest_common.sh@850 -- # return 0 00:13:41.497 21:28:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:41.497 21:28:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:41.497 21:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:41.497 21:28:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.497 21:28:06 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.497 21:28:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.497 21:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:41.497 [2024-04-24 21:28:06.887539] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.497 21:28:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.497 21:28:06 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:41.497 21:28:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.497 21:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:41.497 Malloc0 00:13:41.497 21:28:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.497 21:28:06 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:41.497 21:28:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.497 21:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:41.497 21:28:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.497 21:28:06 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:41.497 21:28:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.497 21:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:41.497 21:28:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.497 21:28:06 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.497 21:28:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.497 21:28:06 -- common/autotest_common.sh@10 -- # set +x 00:13:41.497 [2024-04-24 21:28:06.941030] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.497 21:28:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.497 21:28:06 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:41.497 21:28:06 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:41.497 21:28:06 -- nvmf/common.sh@521 -- # config=() 00:13:41.497 21:28:06 -- nvmf/common.sh@521 -- # local subsystem config 00:13:41.497 21:28:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:41.497 21:28:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:41.497 { 00:13:41.497 "params": { 00:13:41.497 "name": "Nvme$subsystem", 00:13:41.497 "trtype": "$TEST_TRANSPORT", 00:13:41.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:41.497 "adrfam": "ipv4", 00:13:41.497 "trsvcid": 
"$NVMF_PORT", 00:13:41.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:41.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:41.497 "hdgst": ${hdgst:-false}, 00:13:41.497 "ddgst": ${ddgst:-false} 00:13:41.497 }, 00:13:41.497 "method": "bdev_nvme_attach_controller" 00:13:41.497 } 00:13:41.497 EOF 00:13:41.497 )") 00:13:41.497 21:28:06 -- nvmf/common.sh@543 -- # cat 00:13:41.497 21:28:06 -- nvmf/common.sh@545 -- # jq . 00:13:41.497 21:28:06 -- nvmf/common.sh@546 -- # IFS=, 00:13:41.497 21:28:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:41.497 "params": { 00:13:41.497 "name": "Nvme1", 00:13:41.497 "trtype": "tcp", 00:13:41.497 "traddr": "10.0.0.2", 00:13:41.497 "adrfam": "ipv4", 00:13:41.497 "trsvcid": "4420", 00:13:41.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:41.497 "hdgst": false, 00:13:41.497 "ddgst": false 00:13:41.497 }, 00:13:41.497 "method": "bdev_nvme_attach_controller" 00:13:41.497 }' 00:13:41.497 [2024-04-24 21:28:06.987051] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:13:41.497 [2024-04-24 21:28:06.987119] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595794 ] 00:13:41.497 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.497 [2024-04-24 21:28:07.047169] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.497 [2024-04-24 21:28:07.160347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.497 [2024-04-24 21:28:07.160406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.497 [2024-04-24 21:28:07.160409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.755 I/O targets: 00:13:41.755 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:41.755 00:13:41.755 00:13:41.755 CUnit - A unit testing framework for C - Version 2.1-3 00:13:41.755 http://cunit.sourceforge.net/ 00:13:41.755 00:13:41.755 00:13:41.755 Suite: bdevio tests on: Nvme1n1 00:13:41.755 Test: blockdev write read block ...passed 00:13:42.014 Test: blockdev write zeroes read block ...passed 00:13:42.014 Test: blockdev write zeroes read no split ...passed 00:13:42.014 Test: blockdev write zeroes read split ...passed 00:13:42.014 Test: blockdev write zeroes read split partial ...passed 00:13:42.014 Test: blockdev reset ...[2024-04-24 21:28:07.583501] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:42.014 [2024-04-24 21:28:07.583610] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bbf60 (9): Bad file descriptor 00:13:42.014 [2024-04-24 21:28:07.642464] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:42.014 passed 00:13:42.014 Test: blockdev write read 8 blocks ...passed 00:13:42.014 Test: blockdev write read size > 128k ...passed 00:13:42.014 Test: blockdev write read invalid size ...passed 00:13:42.014 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:42.014 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:42.014 Test: blockdev write read max offset ...passed 00:13:42.273 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:42.273 Test: blockdev writev readv 8 blocks ...passed 00:13:42.273 Test: blockdev writev readv 30 x 1block ...passed 00:13:42.273 Test: blockdev writev readv block ...passed 00:13:42.273 Test: blockdev writev readv size > 128k ...passed 00:13:42.273 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:42.273 Test: blockdev comparev and writev ...[2024-04-24 21:28:07.900170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.273 [2024-04-24 21:28:07.900207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:42.273 [2024-04-24 21:28:07.900232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.273 [2024-04-24 21:28:07.900251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:42.273 [2024-04-24 21:28:07.900679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.273 [2024-04-24 21:28:07.900705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:42.273 [2024-04-24 21:28:07.900727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.273 [2024-04-24 21:28:07.900744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:42.274 [2024-04-24 21:28:07.901180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.274 [2024-04-24 21:28:07.901204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:42.274 [2024-04-24 21:28:07.901225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.274 [2024-04-24 21:28:07.901241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:42.274 [2024-04-24 21:28:07.901681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.274 [2024-04-24 21:28:07.901707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:42.274 [2024-04-24 21:28:07.901730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.274 [2024-04-24 21:28:07.901747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:42.274 passed 00:13:42.533 Test: blockdev nvme passthru rw ...passed 00:13:42.533 Test: blockdev nvme passthru vendor specific ...[2024-04-24 21:28:07.984079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.533 [2024-04-24 21:28:07.984107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:42.533 [2024-04-24 21:28:07.984385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.533 [2024-04-24 21:28:07.984409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:42.533 [2024-04-24 21:28:07.984680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.533 [2024-04-24 21:28:07.984705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:42.533 [2024-04-24 21:28:07.984982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.533 [2024-04-24 21:28:07.985005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:42.533 passed 00:13:42.533 Test: blockdev nvme admin passthru ...passed 00:13:42.533 Test: blockdev copy ...passed 00:13:42.533 00:13:42.533 Run Summary: Type Total Ran Passed Failed Inactive 00:13:42.533 suites 1 1 n/a 0 0 00:13:42.533 tests 23 23 23 0 0 00:13:42.533 asserts 152 152 152 0 n/a 00:13:42.533 00:13:42.533 Elapsed time = 1.343 seconds 00:13:42.793 21:28:08 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.793 21:28:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.793 21:28:08 -- common/autotest_common.sh@10 -- # set +x 00:13:42.793 21:28:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.793 21:28:08 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:42.793 21:28:08 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:42.793 21:28:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:42.793 21:28:08 -- nvmf/common.sh@117 -- # sync 00:13:42.793 21:28:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.793 21:28:08 -- nvmf/common.sh@120 -- # set +e 00:13:42.793 21:28:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.793 21:28:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.793 rmmod nvme_tcp 00:13:42.793 rmmod nvme_fabrics 00:13:42.793 rmmod nvme_keyring 00:13:42.793 21:28:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.793 21:28:08 -- nvmf/common.sh@124 -- # set -e 00:13:42.793 21:28:08 -- nvmf/common.sh@125 -- # return 0 00:13:42.793 21:28:08 -- nvmf/common.sh@478 -- # '[' -n 2595640 ']' 00:13:42.793 21:28:08 -- nvmf/common.sh@479 -- # killprocess 2595640 00:13:42.793 21:28:08 -- common/autotest_common.sh@936 -- # '[' -z 2595640 ']' 00:13:42.793 21:28:08 -- common/autotest_common.sh@940 -- # kill -0 2595640 00:13:42.793 21:28:08 -- common/autotest_common.sh@941 -- # uname 00:13:42.793 21:28:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:42.793 21:28:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2595640 00:13:42.793 21:28:08 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:42.793 21:28:08 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:42.793 21:28:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2595640' 00:13:42.793 killing process with pid 2595640 00:13:42.793 21:28:08 -- common/autotest_common.sh@955 -- # kill 2595640 00:13:42.793 21:28:08 -- common/autotest_common.sh@960 -- # wait 2595640 00:13:43.051 21:28:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:43.051 21:28:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:43.051 21:28:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:43.051 21:28:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.051 21:28:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.051 21:28:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.051 21:28:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.051 21:28:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.590 21:28:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:45.590 00:13:45.590 real 0m6.932s 00:13:45.590 user 0m13.233s 00:13:45.590 sys 0m2.082s 00:13:45.590 21:28:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:45.590 21:28:10 -- common/autotest_common.sh@10 -- # set +x 00:13:45.590 ************************************ 00:13:45.590 END TEST nvmf_bdevio 00:13:45.590 ************************************ 00:13:45.590 21:28:10 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:13:45.590 21:28:10 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:45.590 21:28:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:45.590 21:28:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.590 21:28:10 -- common/autotest_common.sh@10 -- # set +x 00:13:45.590 ************************************ 00:13:45.590 START TEST nvmf_bdevio_no_huge 00:13:45.590 ************************************ 00:13:45.590 21:28:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:45.590 * Looking for test storage... 
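Relative to the bdevio run above, this no-huge variant changes only the target's memory flags; a sketch of the delta, with flag values taken from the nvmf_tgt invocation traced further down (the exact NO_HUGE contents are an assumption, inferred from that command line and the NVMF_APP+=("${NO_HUGE[@]}") line in nvmf/common.sh):

  # hugepage run:  nvmf_tgt -i 0 -e 0xFFFF -m 0x78
  # no-huge run:   nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  NO_HUGE=(--no-huge -s 1024)
  # --no-huge makes DPDK allocate from anonymous memory instead of hugetlbfs,
  # and -s 1024 caps the pre-allocated pool at 1024 MB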
00:13:45.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.590 21:28:10 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.590 21:28:10 -- nvmf/common.sh@7 -- # uname -s 00:13:45.590 21:28:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.590 21:28:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.590 21:28:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.590 21:28:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.590 21:28:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.590 21:28:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.590 21:28:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.590 21:28:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.590 21:28:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.590 21:28:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.590 21:28:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:45.590 21:28:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:45.590 21:28:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.590 21:28:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.590 21:28:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.590 21:28:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.590 21:28:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.590 21:28:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.590 21:28:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.590 21:28:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.591 21:28:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.591 21:28:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.591 21:28:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.591 21:28:10 -- paths/export.sh@5 -- # export PATH 00:13:45.591 21:28:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.591 21:28:10 -- nvmf/common.sh@47 -- # : 0 00:13:45.591 21:28:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.591 21:28:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.591 21:28:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.591 21:28:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.591 21:28:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.591 21:28:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.591 21:28:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.591 21:28:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.591 21:28:10 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.591 21:28:10 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.591 21:28:10 -- target/bdevio.sh@14 -- # nvmftestinit 00:13:45.591 21:28:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:45.591 21:28:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.591 21:28:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:45.591 21:28:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:45.591 21:28:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:45.591 21:28:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.591 21:28:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.591 21:28:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.591 21:28:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:45.591 21:28:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:45.591 21:28:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:45.591 21:28:10 -- common/autotest_common.sh@10 -- # set +x 00:13:47.494 21:28:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:47.494 21:28:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.494 21:28:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.494 21:28:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.494 21:28:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.494 21:28:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.494 21:28:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.494 21:28:12 -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.494 21:28:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.494 21:28:12 -- nvmf/common.sh@296 
-- # e810=() 00:13:47.494 21:28:12 -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.494 21:28:12 -- nvmf/common.sh@297 -- # x722=() 00:13:47.494 21:28:12 -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.494 21:28:12 -- nvmf/common.sh@298 -- # mlx=() 00:13:47.494 21:28:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.494 21:28:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.494 21:28:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.494 21:28:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.494 21:28:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.494 21:28:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.494 21:28:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.494 21:28:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.494 21:28:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.494 21:28:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.494 21:28:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.494 21:28:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.494 21:28:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.495 21:28:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.495 21:28:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.495 21:28:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.495 21:28:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:47.495 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:47.495 21:28:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.495 21:28:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:47.495 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:47.495 21:28:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.495 21:28:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.495 21:28:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.495 21:28:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:47.495 21:28:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.495 21:28:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:47.495 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:13:47.495 21:28:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.495 21:28:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.495 21:28:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.495 21:28:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:47.495 21:28:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.495 21:28:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:47.495 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:47.495 21:28:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.495 21:28:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:47.495 21:28:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:47.495 21:28:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:47.495 21:28:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:47.495 21:28:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.495 21:28:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.495 21:28:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.495 21:28:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.495 21:28:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.495 21:28:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.495 21:28:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.495 21:28:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.495 21:28:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.495 21:28:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.495 21:28:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.495 21:28:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.495 21:28:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.495 21:28:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.495 21:28:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.495 21:28:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.495 21:28:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.495 21:28:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.495 21:28:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.495 21:28:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:13:47.495 00:13:47.495 --- 10.0.0.2 ping statistics --- 00:13:47.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.495 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:13:47.495 21:28:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:47.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:13:47.495 00:13:47.495 --- 10.0.0.1 ping statistics --- 00:13:47.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.495 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:13:47.495 21:28:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.495 21:28:13 -- nvmf/common.sh@411 -- # return 0 00:13:47.495 21:28:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:47.495 21:28:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.495 21:28:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:47.495 21:28:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:47.495 21:28:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.495 21:28:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:47.495 21:28:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:47.495 21:28:13 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:47.495 21:28:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:47.495 21:28:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:47.495 21:28:13 -- common/autotest_common.sh@10 -- # set +x 00:13:47.495 21:28:13 -- nvmf/common.sh@470 -- # nvmfpid=2597871 00:13:47.495 21:28:13 -- nvmf/common.sh@471 -- # waitforlisten 2597871 00:13:47.495 21:28:13 -- common/autotest_common.sh@817 -- # '[' -z 2597871 ']' 00:13:47.495 21:28:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.495 21:28:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:47.495 21:28:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:47.495 21:28:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.495 21:28:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:47.495 21:28:13 -- common/autotest_common.sh@10 -- # set +x 00:13:47.495 [2024-04-24 21:28:13.121484] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:13:47.495 [2024-04-24 21:28:13.121585] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:47.753 [2024-04-24 21:28:13.197047] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.753 [2024-04-24 21:28:13.317743] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.753 [2024-04-24 21:28:13.317791] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.753 [2024-04-24 21:28:13.317805] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.753 [2024-04-24 21:28:13.317823] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.753 [2024-04-24 21:28:13.317834] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
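[editor] The namespace plumbing traced above is scattered across many xtrace records; consolidated for readability, it amounts to the following minimal sketch. It assumes root privileges and two back-to-back E810 ports already bound to the kernel ice driver and named cvl_0_0/cvl_0_1, exactly as detected earlier in this run; nothing here is new, it only restates the commands nvmf_tcp_init executed.

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"                        # target side gets its own netns
    ip link set cvl_0_0 netns "$NS"           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator stays in the default ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                        # initiator -> target reachability check
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator reachability check

With both pings succeeding, nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...) so the target listens on 10.0.0.2 while the initiator-side tools connect from the default namespace.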
00:13:47.753 [2024-04-24 21:28:13.317930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:47.753 [2024-04-24 21:28:13.317995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:47.753 [2024-04-24 21:28:13.318052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:47.753 [2024-04-24 21:28:13.318055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.688 21:28:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:48.688 21:28:14 -- common/autotest_common.sh@850 -- # return 0 00:13:48.688 21:28:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:48.688 21:28:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:48.688 21:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:48.688 21:28:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.688 21:28:14 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.688 21:28:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.688 21:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:48.688 [2024-04-24 21:28:14.081404] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.688 21:28:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.688 21:28:14 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:48.688 21:28:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.688 21:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:48.688 Malloc0 00:13:48.688 21:28:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.688 21:28:14 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:48.688 21:28:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.688 21:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:48.688 21:28:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.688 21:28:14 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:48.688 21:28:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.688 21:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:48.688 21:28:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.688 21:28:14 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.688 21:28:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.688 21:28:14 -- common/autotest_common.sh@10 -- # set +x 00:13:48.688 [2024-04-24 21:28:14.119645] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.688 21:28:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.688 21:28:14 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:48.688 21:28:14 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:48.688 21:28:14 -- nvmf/common.sh@521 -- # config=() 00:13:48.688 21:28:14 -- nvmf/common.sh@521 -- # local subsystem config 00:13:48.688 21:28:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:48.688 21:28:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:48.688 { 00:13:48.688 "params": { 00:13:48.688 "name": "Nvme$subsystem", 00:13:48.688 "trtype": "$TEST_TRANSPORT", 00:13:48.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:48.688 "adrfam": "ipv4", 00:13:48.688 
"trsvcid": "$NVMF_PORT", 00:13:48.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:48.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:48.688 "hdgst": ${hdgst:-false}, 00:13:48.688 "ddgst": ${ddgst:-false} 00:13:48.688 }, 00:13:48.688 "method": "bdev_nvme_attach_controller" 00:13:48.688 } 00:13:48.688 EOF 00:13:48.688 )") 00:13:48.688 21:28:14 -- nvmf/common.sh@543 -- # cat 00:13:48.688 21:28:14 -- nvmf/common.sh@545 -- # jq . 00:13:48.688 21:28:14 -- nvmf/common.sh@546 -- # IFS=, 00:13:48.688 21:28:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:48.688 "params": { 00:13:48.688 "name": "Nvme1", 00:13:48.688 "trtype": "tcp", 00:13:48.688 "traddr": "10.0.0.2", 00:13:48.688 "adrfam": "ipv4", 00:13:48.688 "trsvcid": "4420", 00:13:48.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:48.688 "hdgst": false, 00:13:48.688 "ddgst": false 00:13:48.688 }, 00:13:48.688 "method": "bdev_nvme_attach_controller" 00:13:48.688 }' 00:13:48.688 [2024-04-24 21:28:14.164929] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:13:48.688 [2024-04-24 21:28:14.165047] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2598027 ] 00:13:48.688 [2024-04-24 21:28:14.228279] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:48.688 [2024-04-24 21:28:14.343164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.688 [2024-04-24 21:28:14.343213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.688 [2024-04-24 21:28:14.343217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.947 I/O targets: 00:13:48.947 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:48.947 00:13:48.947 00:13:48.948 CUnit - A unit testing framework for C - Version 2.1-3 00:13:48.948 http://cunit.sourceforge.net/ 00:13:48.948 00:13:48.948 00:13:48.948 Suite: bdevio tests on: Nvme1n1 00:13:48.948 Test: blockdev write read block ...passed 00:13:49.206 Test: blockdev write zeroes read block ...passed 00:13:49.206 Test: blockdev write zeroes read no split ...passed 00:13:49.206 Test: blockdev write zeroes read split ...passed 00:13:49.206 Test: blockdev write zeroes read split partial ...passed 00:13:49.206 Test: blockdev reset ...[2024-04-24 21:28:14.765125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:49.206 [2024-04-24 21:28:14.765238] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e945c0 (9): Bad file descriptor 00:13:49.464 [2024-04-24 21:28:14.914412] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:49.464 passed 00:13:49.464 Test: blockdev write read 8 blocks ...passed 00:13:49.464 Test: blockdev write read size > 128k ...passed 00:13:49.464 Test: blockdev write read invalid size ...passed 00:13:49.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.464 Test: blockdev write read max offset ...passed 00:13:49.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.464 Test: blockdev writev readv 8 blocks ...passed 00:13:49.464 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.464 Test: blockdev writev readv block ...passed 00:13:49.464 Test: blockdev writev readv size > 128k ...passed 00:13:49.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.464 Test: blockdev comparev and writev ...[2024-04-24 21:28:15.091888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.464 [2024-04-24 21:28:15.091924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:49.464 [2024-04-24 21:28:15.091948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.464 [2024-04-24 21:28:15.091964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:49.464 [2024-04-24 21:28:15.092357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.464 [2024-04-24 21:28:15.092382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:49.464 [2024-04-24 21:28:15.092403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.464 [2024-04-24 21:28:15.092420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:49.464 [2024-04-24 21:28:15.092828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.464 [2024-04-24 21:28:15.092854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:49.464 [2024-04-24 21:28:15.092875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.464 [2024-04-24 21:28:15.092892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:49.464 [2024-04-24 21:28:15.093302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.464 [2024-04-24 21:28:15.093327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:49.464 [2024-04-24 21:28:15.093348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.464 [2024-04-24 21:28:15.093364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:49.464 passed 00:13:49.725 Test: blockdev nvme passthru rw ...passed 00:13:49.725 Test: blockdev nvme passthru vendor specific ...[2024-04-24 21:28:15.177054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:49.725 [2024-04-24 21:28:15.177082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:49.725 [2024-04-24 21:28:15.177318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:49.725 [2024-04-24 21:28:15.177341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:49.725 [2024-04-24 21:28:15.177571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:49.725 [2024-04-24 21:28:15.177594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:49.725 [2024-04-24 21:28:15.177841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:49.725 [2024-04-24 21:28:15.177866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:49.725 passed 00:13:49.725 Test: blockdev nvme admin passthru ...passed 00:13:49.725 Test: blockdev copy ...passed 00:13:49.725 00:13:49.725 Run Summary: Type Total Ran Passed Failed Inactive 00:13:49.725 suites 1 1 n/a 0 0 00:13:49.725 tests 23 23 23 0 0 00:13:49.725 asserts 152 152 152 0 n/a 00:13:49.725 00:13:49.725 Elapsed time = 1.381 seconds 00:13:49.983 21:28:15 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.983 21:28:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.983 21:28:15 -- common/autotest_common.sh@10 -- # set +x 00:13:49.983 21:28:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.983 21:28:15 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:49.983 21:28:15 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:49.983 21:28:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:49.983 21:28:15 -- nvmf/common.sh@117 -- # sync 00:13:49.983 21:28:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.983 21:28:15 -- nvmf/common.sh@120 -- # set +e 00:13:49.983 21:28:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.983 21:28:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.983 rmmod nvme_tcp 00:13:49.983 rmmod nvme_fabrics 00:13:49.983 rmmod nvme_keyring 00:13:50.242 21:28:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:50.242 21:28:15 -- nvmf/common.sh@124 -- # set -e 00:13:50.242 21:28:15 -- nvmf/common.sh@125 -- # return 0 00:13:50.242 21:28:15 -- nvmf/common.sh@478 -- # '[' -n 2597871 ']' 00:13:50.242 21:28:15 -- nvmf/common.sh@479 -- # killprocess 2597871 00:13:50.242 21:28:15 -- common/autotest_common.sh@936 -- # '[' -z 2597871 ']' 00:13:50.242 21:28:15 -- common/autotest_common.sh@940 -- # kill -0 2597871 00:13:50.242 21:28:15 -- common/autotest_common.sh@941 -- # uname 00:13:50.242 21:28:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:50.242 21:28:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2597871 00:13:50.242 21:28:15 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:50.242 21:28:15 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:50.242 21:28:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2597871' 00:13:50.242 killing process with pid 2597871 00:13:50.242 21:28:15 -- common/autotest_common.sh@955 -- # kill 2597871 00:13:50.242 21:28:15 -- common/autotest_common.sh@960 -- # wait 2597871 00:13:50.500 21:28:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:50.500 21:28:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:50.500 21:28:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:50.500 21:28:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.500 21:28:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:50.500 21:28:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.500 21:28:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.500 21:28:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.033 21:28:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:53.033 00:13:53.033 real 0m7.313s 00:13:53.033 user 0m14.017s 00:13:53.033 sys 0m2.599s 00:13:53.033 21:28:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:53.033 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:13:53.033 ************************************ 00:13:53.033 END TEST nvmf_bdevio_no_huge 00:13:53.033 ************************************ 00:13:53.033 21:28:18 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:53.033 21:28:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:53.033 21:28:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.033 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:13:53.033 ************************************ 00:13:53.033 START TEST nvmf_tls 00:13:53.033 ************************************ 00:13:53.033 21:28:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:53.033 * Looking for test storage... 
00:13:53.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.033 21:28:18 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.033 21:28:18 -- nvmf/common.sh@7 -- # uname -s 00:13:53.033 21:28:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.033 21:28:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.033 21:28:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.033 21:28:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.033 21:28:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.033 21:28:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.033 21:28:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.033 21:28:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.033 21:28:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.033 21:28:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.033 21:28:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.033 21:28:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.033 21:28:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.033 21:28:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.033 21:28:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.033 21:28:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.033 21:28:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.033 21:28:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.033 21:28:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.033 21:28:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.033 21:28:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.033 21:28:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.033 21:28:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.033 21:28:18 -- paths/export.sh@5 -- # export PATH 00:13:53.033 21:28:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.033 21:28:18 -- nvmf/common.sh@47 -- # : 0 00:13:53.033 21:28:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:53.033 21:28:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:53.034 21:28:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.034 21:28:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.034 21:28:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.034 21:28:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:53.034 21:28:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:53.034 21:28:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:53.034 21:28:18 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.034 21:28:18 -- target/tls.sh@62 -- # nvmftestinit 00:13:53.034 21:28:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:53.034 21:28:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.034 21:28:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:53.034 21:28:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:53.034 21:28:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:53.034 21:28:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.034 21:28:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.034 21:28:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.034 21:28:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:53.034 21:28:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:53.034 21:28:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:53.034 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:13:54.935 21:28:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:54.935 21:28:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.935 21:28:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.935 21:28:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.935 21:28:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:54.935 21:28:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.935 21:28:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.935 21:28:20 -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.936 21:28:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.936 21:28:20 -- nvmf/common.sh@296 -- # e810=() 00:13:54.936 
21:28:20 -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.936 21:28:20 -- nvmf/common.sh@297 -- # x722=() 00:13:54.936 21:28:20 -- nvmf/common.sh@297 -- # local -ga x722 00:13:54.936 21:28:20 -- nvmf/common.sh@298 -- # mlx=() 00:13:54.936 21:28:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.936 21:28:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.936 21:28:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.936 21:28:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.936 21:28:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.936 21:28:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.936 21:28:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.936 21:28:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.936 21:28:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.936 21:28:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.936 21:28:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.936 21:28:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.936 21:28:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.936 21:28:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:54.936 21:28:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.936 21:28:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.936 21:28:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:54.936 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:54.936 21:28:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.936 21:28:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:54.936 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:54.936 21:28:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.936 21:28:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.936 21:28:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.936 21:28:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:54.936 21:28:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.936 21:28:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:54.936 Found net devices under 
0000:0a:00.0: cvl_0_0 00:13:54.936 21:28:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.936 21:28:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.936 21:28:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.936 21:28:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:54.936 21:28:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.936 21:28:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:54.936 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:54.936 21:28:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.936 21:28:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:54.936 21:28:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:54.936 21:28:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:54.936 21:28:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.936 21:28:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.936 21:28:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.936 21:28:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:54.936 21:28:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.936 21:28:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.936 21:28:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:54.936 21:28:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.936 21:28:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.936 21:28:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:54.936 21:28:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:54.936 21:28:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.936 21:28:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.936 21:28:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.936 21:28:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.936 21:28:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:54.936 21:28:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.936 21:28:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.936 21:28:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.936 21:28:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:54.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:13:54.936 00:13:54.936 --- 10.0.0.2 ping statistics --- 00:13:54.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.936 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:13:54.936 21:28:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:54.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:13:54.936 00:13:54.936 --- 10.0.0.1 ping statistics --- 00:13:54.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.936 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:13:54.936 21:28:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.936 21:28:20 -- nvmf/common.sh@411 -- # return 0 00:13:54.936 21:28:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:54.936 21:28:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.936 21:28:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:54.936 21:28:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.936 21:28:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:54.936 21:28:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:54.936 21:28:20 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:54.936 21:28:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:54.936 21:28:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:54.936 21:28:20 -- common/autotest_common.sh@10 -- # set +x 00:13:54.936 21:28:20 -- nvmf/common.sh@470 -- # nvmfpid=2600230 00:13:54.936 21:28:20 -- nvmf/common.sh@471 -- # waitforlisten 2600230 00:13:54.936 21:28:20 -- common/autotest_common.sh@817 -- # '[' -z 2600230 ']' 00:13:54.936 21:28:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.936 21:28:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:54.936 21:28:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.936 21:28:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:54.936 21:28:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:54.936 21:28:20 -- common/autotest_common.sh@10 -- # set +x 00:13:54.936 [2024-04-24 21:28:20.563273] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:13:54.936 [2024-04-24 21:28:20.563374] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.936 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.195 [2024-04-24 21:28:20.636902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.195 [2024-04-24 21:28:20.752218] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.195 [2024-04-24 21:28:20.752282] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.195 [2024-04-24 21:28:20.752298] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.195 [2024-04-24 21:28:20.752311] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.195 [2024-04-24 21:28:20.752323] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
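[editor] This second nvmf_tgt instance is started with --wait-for-rpc because tls.sh needs to configure the socket implementation before the framework initializes; TLS options on the ssl sock impl cannot be changed once subsystem init has run. A minimal sketch of the sequence the following trace records perform (assuming the rpc.py path from this workspace and the default /var/tmp/spdk.sock RPC socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # select the ssl implementation and pin TLS 1.3 while the app is still waiting
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc sock_impl_get_options -i ssl | jq -r .tls_version    # expect 13
    # only now finish application startup and create the TCP transport
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o

The version/ktls probes below (setting 13, then 7, toggling --enable-ktls/--disable-ktls and reading the options back with jq) follow this same set-then-verify pattern.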
00:13:55.195 [2024-04-24 21:28:20.752356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.128 21:28:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:56.128 21:28:21 -- common/autotest_common.sh@850 -- # return 0 00:13:56.128 21:28:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:56.128 21:28:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:56.128 21:28:21 -- common/autotest_common.sh@10 -- # set +x 00:13:56.128 21:28:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.128 21:28:21 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:56.128 21:28:21 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:56.128 true 00:13:56.128 21:28:21 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:56.128 21:28:21 -- target/tls.sh@73 -- # jq -r .tls_version 00:13:56.386 21:28:22 -- target/tls.sh@73 -- # version=0 00:13:56.386 21:28:22 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:56.386 21:28:22 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:56.644 21:28:22 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:56.644 21:28:22 -- target/tls.sh@81 -- # jq -r .tls_version 00:13:56.902 21:28:22 -- target/tls.sh@81 -- # version=13 00:13:56.902 21:28:22 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:56.902 21:28:22 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:57.160 21:28:22 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:57.160 21:28:22 -- target/tls.sh@89 -- # jq -r .tls_version 00:13:57.417 21:28:22 -- target/tls.sh@89 -- # version=7 00:13:57.417 21:28:22 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:57.417 21:28:22 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:57.417 21:28:22 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:57.675 21:28:23 -- target/tls.sh@96 -- # ktls=false 00:13:57.675 21:28:23 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:57.675 21:28:23 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:57.932 21:28:23 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:57.932 21:28:23 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:58.189 21:28:23 -- target/tls.sh@104 -- # ktls=true 00:13:58.190 21:28:23 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:58.190 21:28:23 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:58.448 21:28:23 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:58.448 21:28:23 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:58.706 21:28:24 -- target/tls.sh@112 -- # ktls=false 00:13:58.706 21:28:24 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:58.706 21:28:24 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:13:58.706 21:28:24 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:58.706 21:28:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:58.706 21:28:24 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:58.706 21:28:24 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:13:58.706 21:28:24 -- nvmf/common.sh@693 -- # digest=1 00:13:58.706 21:28:24 -- nvmf/common.sh@694 -- # python - 00:13:58.706 21:28:24 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:58.706 21:28:24 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:58.706 21:28:24 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:58.706 21:28:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:58.706 21:28:24 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:58.706 21:28:24 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:13:58.706 21:28:24 -- nvmf/common.sh@693 -- # digest=1 00:13:58.706 21:28:24 -- nvmf/common.sh@694 -- # python - 00:13:58.706 21:28:24 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:58.706 21:28:24 -- target/tls.sh@121 -- # mktemp 00:13:58.706 21:28:24 -- target/tls.sh@121 -- # key_path=/tmp/tmp.xOe0tgigMk 00:13:58.706 21:28:24 -- target/tls.sh@122 -- # mktemp 00:13:58.706 21:28:24 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.itaE5SGSRx 00:13:58.706 21:28:24 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:58.706 21:28:24 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:58.706 21:28:24 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.xOe0tgigMk 00:13:58.706 21:28:24 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.itaE5SGSRx 00:13:58.706 21:28:24 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:58.964 21:28:24 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:13:59.531 21:28:24 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.xOe0tgigMk 00:13:59.531 21:28:24 -- target/tls.sh@49 -- # local key=/tmp/tmp.xOe0tgigMk 00:13:59.531 21:28:24 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:59.788 [2024-04-24 21:28:25.210826] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.788 21:28:25 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:00.045 21:28:25 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:00.304 [2024-04-24 21:28:25.736217] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:00.304 [2024-04-24 21:28:25.736461] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.304 21:28:25 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:00.563 malloc0 00:14:00.563 21:28:25 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:00.563 21:28:26 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOe0tgigMk 00:14:00.821 [2024-04-24 21:28:26.454395] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:00.821 21:28:26 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xOe0tgigMk 00:14:01.079 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.081 Initializing NVMe Controllers 00:14:11.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:11.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:11.081 Initialization complete. Launching workers. 00:14:11.081 ======================================================== 00:14:11.081 Latency(us) 00:14:11.081 Device Information : IOPS MiB/s Average min max 00:14:11.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7655.74 29.91 8362.57 1282.34 12805.96 00:14:11.081 ======================================================== 00:14:11.081 Total : 7655.74 29.91 8362.57 1282.34 12805.96 00:14:11.081 00:14:11.081 21:28:36 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOe0tgigMk 00:14:11.081 21:28:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:11.081 21:28:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:11.081 21:28:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:11.081 21:28:36 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xOe0tgigMk' 00:14:11.081 21:28:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.081 21:28:36 -- target/tls.sh@28 -- # bdevperf_pid=2602140 00:14:11.081 21:28:36 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:11.081 21:28:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:11.081 21:28:36 -- target/tls.sh@31 -- # waitforlisten 2602140 /var/tmp/bdevperf.sock 00:14:11.081 21:28:36 -- common/autotest_common.sh@817 -- # '[' -z 2602140 ']' 00:14:11.081 21:28:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.081 21:28:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:11.081 21:28:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.081 21:28:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:11.081 21:28:36 -- common/autotest_common.sh@10 -- # set +x 00:14:11.081 [2024-04-24 21:28:36.625979] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:14:11.081 [2024-04-24 21:28:36.626059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602140 ] 00:14:11.081 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.081 [2024-04-24 21:28:36.683432] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.339 [2024-04-24 21:28:36.787286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.339 21:28:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:11.339 21:28:36 -- common/autotest_common.sh@850 -- # return 0 00:14:11.339 21:28:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOe0tgigMk 00:14:11.597 [2024-04-24 21:28:37.165395] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.597 [2024-04-24 21:28:37.165509] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:11.597 TLSTESTn1 00:14:11.597 21:28:37 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:11.854 Running I/O for 10 seconds... 00:14:21.821 00:14:21.821 Latency(us) 00:14:21.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.821 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:21.821 Verification LBA range: start 0x0 length 0x2000 00:14:21.821 TLSTESTn1 : 10.07 1564.19 6.11 0.00 0.00 81567.60 8689.59 120392.06 00:14:21.821 =================================================================================================================== 00:14:21.821 Total : 1564.19 6.11 0.00 0.00 81567.60 8689.59 120392.06 00:14:21.821 0 00:14:21.821 21:28:47 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:21.821 21:28:47 -- target/tls.sh@45 -- # killprocess 2602140 00:14:21.821 21:28:47 -- common/autotest_common.sh@936 -- # '[' -z 2602140 ']' 00:14:21.821 21:28:47 -- common/autotest_common.sh@940 -- # kill -0 2602140 00:14:21.821 21:28:47 -- common/autotest_common.sh@941 -- # uname 00:14:21.821 21:28:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:21.821 21:28:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2602140 00:14:22.079 21:28:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:22.079 21:28:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:22.079 21:28:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2602140' 00:14:22.079 killing process with pid 2602140 00:14:22.079 21:28:47 -- common/autotest_common.sh@955 -- # kill 2602140 00:14:22.079 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.079 00:14:22.079 Latency(us) 00:14:22.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.079 =================================================================================================================== 00:14:22.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.079 [2024-04-24 21:28:47.505528] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:22.079 21:28:47 -- common/autotest_common.sh@960 -- # wait 2602140 00:14:22.337 21:28:47 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.itaE5SGSRx 00:14:22.337 21:28:47 -- common/autotest_common.sh@638 -- # local es=0 00:14:22.337 21:28:47 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.itaE5SGSRx 00:14:22.337 21:28:47 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:22.337 21:28:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:22.337 21:28:47 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:22.337 21:28:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:22.337 21:28:47 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.itaE5SGSRx 00:14:22.337 21:28:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:22.337 21:28:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:22.337 21:28:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:22.337 21:28:47 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.itaE5SGSRx' 00:14:22.337 21:28:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:22.337 21:28:47 -- target/tls.sh@28 -- # bdevperf_pid=2603454 00:14:22.337 21:28:47 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:22.337 21:28:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:22.337 21:28:47 -- target/tls.sh@31 -- # waitforlisten 2603454 /var/tmp/bdevperf.sock 00:14:22.337 21:28:47 -- common/autotest_common.sh@817 -- # '[' -z 2603454 ']' 00:14:22.337 21:28:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.337 21:28:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:22.337 21:28:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:22.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.337 21:28:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:22.337 21:28:47 -- common/autotest_common.sh@10 -- # set +x 00:14:22.337 [2024-04-24 21:28:47.812094] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:14:22.337 [2024-04-24 21:28:47.812169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603454 ] 00:14:22.337 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.337 [2024-04-24 21:28:47.868805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.337 [2024-04-24 21:28:47.971762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.595 21:28:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:22.595 21:28:48 -- common/autotest_common.sh@850 -- # return 0 00:14:22.595 21:28:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.itaE5SGSRx 00:14:22.853 [2024-04-24 21:28:48.298239] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:22.853 [2024-04-24 21:28:48.298342] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:22.853 [2024-04-24 21:28:48.307545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:22.853 [2024-04-24 21:28:48.308367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b35230 (107): Transport endpoint is not connected 00:14:22.853 [2024-04-24 21:28:48.309359] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b35230 (9): Bad file descriptor 00:14:22.853 [2024-04-24 21:28:48.310358] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:22.853 [2024-04-24 21:28:48.310379] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:22.853 [2024-04-24 21:28:48.310392] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:22.853 request: 00:14:22.853 { 00:14:22.853 "name": "TLSTEST", 00:14:22.853 "trtype": "tcp", 00:14:22.853 "traddr": "10.0.0.2", 00:14:22.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:22.853 "adrfam": "ipv4", 00:14:22.853 "trsvcid": "4420", 00:14:22.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.853 "psk": "/tmp/tmp.itaE5SGSRx", 00:14:22.853 "method": "bdev_nvme_attach_controller", 00:14:22.853 "req_id": 1 00:14:22.853 } 00:14:22.853 Got JSON-RPC error response 00:14:22.853 response: 00:14:22.853 { 00:14:22.853 "code": -32602, 00:14:22.853 "message": "Invalid parameters" 00:14:22.853 } 00:14:22.853 21:28:48 -- target/tls.sh@36 -- # killprocess 2603454 00:14:22.853 21:28:48 -- common/autotest_common.sh@936 -- # '[' -z 2603454 ']' 00:14:22.853 21:28:48 -- common/autotest_common.sh@940 -- # kill -0 2603454 00:14:22.853 21:28:48 -- common/autotest_common.sh@941 -- # uname 00:14:22.853 21:28:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.853 21:28:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2603454 00:14:22.853 21:28:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:22.853 21:28:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:22.853 21:28:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2603454' 00:14:22.853 killing process with pid 2603454 00:14:22.853 21:28:48 -- common/autotest_common.sh@955 -- # kill 2603454 00:14:22.853 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.853 00:14:22.853 Latency(us) 00:14:22.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.853 =================================================================================================================== 00:14:22.853 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:22.853 [2024-04-24 21:28:48.361906] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:22.853 21:28:48 -- common/autotest_common.sh@960 -- # wait 2603454 00:14:23.112 21:28:48 -- target/tls.sh@37 -- # return 1 00:14:23.112 21:28:48 -- common/autotest_common.sh@641 -- # es=1 00:14:23.112 21:28:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:23.112 21:28:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:23.112 21:28:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:23.112 21:28:48 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xOe0tgigMk 00:14:23.112 21:28:48 -- common/autotest_common.sh@638 -- # local es=0 00:14:23.112 21:28:48 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xOe0tgigMk 00:14:23.112 21:28:48 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:23.112 21:28:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:23.112 21:28:48 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:23.112 21:28:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:23.112 21:28:48 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xOe0tgigMk 00:14:23.112 21:28:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:23.112 21:28:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:23.112 21:28:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:14:23.112 21:28:48 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xOe0tgigMk' 00:14:23.112 21:28:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.112 21:28:48 -- target/tls.sh@28 -- # bdevperf_pid=2603504 00:14:23.112 21:28:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:23.112 21:28:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:23.112 21:28:48 -- target/tls.sh@31 -- # waitforlisten 2603504 /var/tmp/bdevperf.sock 00:14:23.112 21:28:48 -- common/autotest_common.sh@817 -- # '[' -z 2603504 ']' 00:14:23.112 21:28:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.112 21:28:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:23.112 21:28:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.112 21:28:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:23.112 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:14:23.112 [2024-04-24 21:28:48.670640] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:23.112 [2024-04-24 21:28:48.670729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603504 ] 00:14:23.112 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.112 [2024-04-24 21:28:48.736792] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.378 [2024-04-24 21:28:48.852837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.378 21:28:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:23.378 21:28:48 -- common/autotest_common.sh@850 -- # return 0 00:14:23.378 21:28:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.xOe0tgigMk 00:14:23.643 [2024-04-24 21:28:49.201962] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.643 [2024-04-24 21:28:49.202073] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:23.643 [2024-04-24 21:28:49.209752] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:23.643 [2024-04-24 21:28:49.209787] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:23.643 [2024-04-24 21:28:49.209841] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:23.643 [2024-04-24 21:28:49.210869] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeb230 (107): Transport endpoint is not connected 00:14:23.643 [2024-04-24 21:28:49.211861] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeb230 (9): Bad file descriptor 00:14:23.643 [2024-04-24 21:28:49.212860] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:23.643 [2024-04-24 21:28:49.212880] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:23.643 [2024-04-24 21:28:49.212892] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:23.643 request: 00:14:23.643 { 00:14:23.643 "name": "TLSTEST", 00:14:23.643 "trtype": "tcp", 00:14:23.643 "traddr": "10.0.0.2", 00:14:23.643 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:23.643 "adrfam": "ipv4", 00:14:23.643 "trsvcid": "4420", 00:14:23.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.643 "psk": "/tmp/tmp.xOe0tgigMk", 00:14:23.643 "method": "bdev_nvme_attach_controller", 00:14:23.643 "req_id": 1 00:14:23.643 } 00:14:23.643 Got JSON-RPC error response 00:14:23.643 response: 00:14:23.643 { 00:14:23.643 "code": -32602, 00:14:23.643 "message": "Invalid parameters" 00:14:23.643 } 00:14:23.643 21:28:49 -- target/tls.sh@36 -- # killprocess 2603504 00:14:23.643 21:28:49 -- common/autotest_common.sh@936 -- # '[' -z 2603504 ']' 00:14:23.643 21:28:49 -- common/autotest_common.sh@940 -- # kill -0 2603504 00:14:23.643 21:28:49 -- common/autotest_common.sh@941 -- # uname 00:14:23.643 21:28:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:23.643 21:28:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2603504 00:14:23.643 21:28:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:23.643 21:28:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:23.643 21:28:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2603504' 00:14:23.643 killing process with pid 2603504 00:14:23.643 21:28:49 -- common/autotest_common.sh@955 -- # kill 2603504 00:14:23.643 Received shutdown signal, test time was about 10.000000 seconds 00:14:23.643 00:14:23.643 Latency(us) 00:14:23.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.643 =================================================================================================================== 00:14:23.643 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:23.643 [2024-04-24 21:28:49.262945] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:23.643 21:28:49 -- common/autotest_common.sh@960 -- # wait 2603504 00:14:23.901 21:28:49 -- target/tls.sh@37 -- # return 1 00:14:23.901 21:28:49 -- common/autotest_common.sh@641 -- # es=1 00:14:23.901 21:28:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:23.901 21:28:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:23.901 21:28:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:23.901 21:28:49 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOe0tgigMk 00:14:23.901 21:28:49 -- common/autotest_common.sh@638 -- # local es=0 00:14:23.901 21:28:49 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOe0tgigMk 00:14:23.901 21:28:49 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:23.901 21:28:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:23.901 21:28:49 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:23.901 21:28:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:23.901 21:28:49 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOe0tgigMk 00:14:23.901 21:28:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:23.901 21:28:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:23.901 21:28:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:23.901 21:28:49 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xOe0tgigMk' 00:14:23.901 21:28:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.901 21:28:49 -- target/tls.sh@28 -- # bdevperf_pid=2603611 00:14:23.901 21:28:49 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:23.901 21:28:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:23.901 21:28:49 -- target/tls.sh@31 -- # waitforlisten 2603611 /var/tmp/bdevperf.sock 00:14:23.901 21:28:49 -- common/autotest_common.sh@817 -- # '[' -z 2603611 ']' 00:14:23.901 21:28:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.901 21:28:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:23.901 21:28:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.901 21:28:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:23.901 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:14:23.901 [2024-04-24 21:28:49.567640] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:14:23.901 [2024-04-24 21:28:49.567738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603611 ] 00:14:24.160 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.160 [2024-04-24 21:28:49.625944] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.160 [2024-04-24 21:28:49.728016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.160 21:28:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:24.160 21:28:49 -- common/autotest_common.sh@850 -- # return 0 00:14:24.160 21:28:49 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOe0tgigMk 00:14:24.418 [2024-04-24 21:28:50.063414] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:24.418 [2024-04-24 21:28:50.063543] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:24.418 [2024-04-24 21:28:50.072037] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:24.418 [2024-04-24 21:28:50.072072] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:24.418 [2024-04-24 21:28:50.072127] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:24.418 [2024-04-24 21:28:50.072702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b7230 (107): Transport endpoint is not connected 00:14:24.418 [2024-04-24 21:28:50.073696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b7230 (9): Bad file descriptor 00:14:24.418 [2024-04-24 21:28:50.074694] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:24.418 [2024-04-24 21:28:50.074717] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:24.418 [2024-04-24 21:28:50.074730] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
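
Annotation: the two failures are symmetric. The target builds a TLS PSK identity from the host and subsystem NQNs — the format `NVMe0R01 <hostnqn> <subnqn>` is copied verbatim from the tcp.c/posix.c errors above — and only the host1/cnode1 pairing has a registered key, so host2→cnode1 (previous attempt) and host1→cnode2 (this one) both miss. A sketch of that lookup (the registry contents here are illustrative):

```python
# Server-side PSK lookup that fails above. Identity format is taken from the
# logged errors; the registry is an illustration (only host1 <-> cnode1 set).
def psk_identity(hostnqn: str, subnqn: str) -> str:
    return f"NVMe0R01 {hostnqn} {subnqn}"

registered = {
    psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode1"):
        "/tmp/tmp.xOe0tgigMk",
}

for host, sub in [
    ("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"),
    ("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2"),
]:
    ident = psk_identity(host, sub)
    print(ident, "->", registered.get(ident, "no PSK found"))
```
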
00:14:24.418 request: 00:14:24.418 { 00:14:24.418 "name": "TLSTEST", 00:14:24.418 "trtype": "tcp", 00:14:24.418 "traddr": "10.0.0.2", 00:14:24.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.418 "adrfam": "ipv4", 00:14:24.418 "trsvcid": "4420", 00:14:24.418 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:24.418 "psk": "/tmp/tmp.xOe0tgigMk", 00:14:24.418 "method": "bdev_nvme_attach_controller", 00:14:24.418 "req_id": 1 00:14:24.418 } 00:14:24.418 Got JSON-RPC error response 00:14:24.418 response: 00:14:24.418 { 00:14:24.418 "code": -32602, 00:14:24.418 "message": "Invalid parameters" 00:14:24.418 } 00:14:24.418 21:28:50 -- target/tls.sh@36 -- # killprocess 2603611 00:14:24.418 21:28:50 -- common/autotest_common.sh@936 -- # '[' -z 2603611 ']' 00:14:24.418 21:28:50 -- common/autotest_common.sh@940 -- # kill -0 2603611 00:14:24.418 21:28:50 -- common/autotest_common.sh@941 -- # uname 00:14:24.677 21:28:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:24.677 21:28:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2603611 00:14:24.677 21:28:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:24.677 21:28:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:24.677 21:28:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2603611' 00:14:24.677 killing process with pid 2603611 00:14:24.677 21:28:50 -- common/autotest_common.sh@955 -- # kill 2603611 00:14:24.677 Received shutdown signal, test time was about 10.000000 seconds 00:14:24.677 00:14:24.677 Latency(us) 00:14:24.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.677 =================================================================================================================== 00:14:24.677 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:24.677 [2024-04-24 21:28:50.126325] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:24.677 21:28:50 -- common/autotest_common.sh@960 -- # wait 2603611 00:14:24.936 21:28:50 -- target/tls.sh@37 -- # return 1 00:14:24.936 21:28:50 -- common/autotest_common.sh@641 -- # es=1 00:14:24.936 21:28:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:24.936 21:28:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:24.936 21:28:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:24.936 21:28:50 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:24.936 21:28:50 -- common/autotest_common.sh@638 -- # local es=0 00:14:24.936 21:28:50 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:24.936 21:28:50 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:24.936 21:28:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:24.936 21:28:50 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:24.936 21:28:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:24.936 21:28:50 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:24.936 21:28:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:24.936 21:28:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:24.936 21:28:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:24.936 21:28:50 -- target/tls.sh@23 -- # psk= 
00:14:24.936 21:28:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:24.936 21:28:50 -- target/tls.sh@28 -- # bdevperf_pid=2603751 00:14:24.936 21:28:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:24.936 21:28:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:24.936 21:28:50 -- target/tls.sh@31 -- # waitforlisten 2603751 /var/tmp/bdevperf.sock 00:14:24.936 21:28:50 -- common/autotest_common.sh@817 -- # '[' -z 2603751 ']' 00:14:24.936 21:28:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:24.936 21:28:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:24.936 21:28:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:24.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:24.936 21:28:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:24.936 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:14:24.936 [2024-04-24 21:28:50.433951] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:24.936 [2024-04-24 21:28:50.434044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603751 ] 00:14:24.936 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.936 [2024-04-24 21:28:50.493139] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.936 [2024-04-24 21:28:50.600232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.194 21:28:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:25.194 21:28:50 -- common/autotest_common.sh@850 -- # return 0 00:14:25.194 21:28:50 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:25.453 [2024-04-24 21:28:50.954400] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:25.453 [2024-04-24 21:28:50.956302] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9ba0 (9): Bad file descriptor 00:14:25.453 [2024-04-24 21:28:50.957297] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:25.453 [2024-04-24 21:28:50.957318] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:25.453 [2024-04-24 21:28:50.957331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
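
Annotation: target/tls.sh@155 repeats the negative test with an empty `psk=`. The listener was created with TLS required, so the plain-text connection is torn down just the same. With the `rpc_call()` helper from the first sketch this is simply the same request minus the `psk` parameter — matching the request JSON printed just below, which carries no psk field:

```python
# Same attach, no "psk" param (cf. the request JSON below). The listener still
# requires TLS, so the controller lands in the failed state logged above.
resp = rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
})
```
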
00:14:25.453 request: 00:14:25.453 { 00:14:25.453 "name": "TLSTEST", 00:14:25.453 "trtype": "tcp", 00:14:25.453 "traddr": "10.0.0.2", 00:14:25.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:25.453 "adrfam": "ipv4", 00:14:25.453 "trsvcid": "4420", 00:14:25.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.453 "method": "bdev_nvme_attach_controller", 00:14:25.453 "req_id": 1 00:14:25.453 } 00:14:25.453 Got JSON-RPC error response 00:14:25.453 response: 00:14:25.453 { 00:14:25.453 "code": -32602, 00:14:25.453 "message": "Invalid parameters" 00:14:25.453 } 00:14:25.453 21:28:50 -- target/tls.sh@36 -- # killprocess 2603751 00:14:25.453 21:28:50 -- common/autotest_common.sh@936 -- # '[' -z 2603751 ']' 00:14:25.453 21:28:50 -- common/autotest_common.sh@940 -- # kill -0 2603751 00:14:25.453 21:28:50 -- common/autotest_common.sh@941 -- # uname 00:14:25.453 21:28:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:25.453 21:28:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2603751 00:14:25.453 21:28:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:25.453 21:28:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:25.453 21:28:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2603751' 00:14:25.453 killing process with pid 2603751 00:14:25.453 21:28:50 -- common/autotest_common.sh@955 -- # kill 2603751 00:14:25.453 Received shutdown signal, test time was about 10.000000 seconds 00:14:25.453 00:14:25.453 Latency(us) 00:14:25.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.453 =================================================================================================================== 00:14:25.453 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:25.453 21:28:50 -- common/autotest_common.sh@960 -- # wait 2603751 00:14:25.711 21:28:51 -- target/tls.sh@37 -- # return 1 00:14:25.711 21:28:51 -- common/autotest_common.sh@641 -- # es=1 00:14:25.711 21:28:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:25.711 21:28:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:25.711 21:28:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:25.711 21:28:51 -- target/tls.sh@158 -- # killprocess 2600230 00:14:25.711 21:28:51 -- common/autotest_common.sh@936 -- # '[' -z 2600230 ']' 00:14:25.711 21:28:51 -- common/autotest_common.sh@940 -- # kill -0 2600230 00:14:25.711 21:28:51 -- common/autotest_common.sh@941 -- # uname 00:14:25.711 21:28:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:25.711 21:28:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2600230 00:14:25.711 21:28:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:25.711 21:28:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:25.711 21:28:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2600230' 00:14:25.711 killing process with pid 2600230 00:14:25.711 21:28:51 -- common/autotest_common.sh@955 -- # kill 2600230 00:14:25.711 [2024-04-24 21:28:51.287861] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:25.711 21:28:51 -- common/autotest_common.sh@960 -- # wait 2600230 00:14:25.970 21:28:51 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:25.970 21:28:51 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:14:25.970 21:28:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:25.970 21:28:51 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:25.970 21:28:51 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:25.970 21:28:51 -- nvmf/common.sh@693 -- # digest=2 00:14:25.970 21:28:51 -- nvmf/common.sh@694 -- # python - 00:14:25.970 21:28:51 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:25.970 21:28:51 -- target/tls.sh@160 -- # mktemp 00:14:25.970 21:28:51 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.vEaWycc1IQ 00:14:25.970 21:28:51 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:25.970 21:28:51 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.vEaWycc1IQ 00:14:25.970 21:28:51 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:25.970 21:28:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:25.970 21:28:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:25.970 21:28:51 -- common/autotest_common.sh@10 -- # set +x 00:14:25.970 21:28:51 -- nvmf/common.sh@470 -- # nvmfpid=2603903 00:14:25.970 21:28:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:25.970 21:28:51 -- nvmf/common.sh@471 -- # waitforlisten 2603903 00:14:25.970 21:28:51 -- common/autotest_common.sh@817 -- # '[' -z 2603903 ']' 00:14:25.970 21:28:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.970 21:28:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:25.970 21:28:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.970 21:28:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:25.970 21:28:51 -- common/autotest_common.sh@10 -- # set +x 00:14:26.228 [2024-04-24 21:28:51.680021] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:26.228 [2024-04-24 21:28:51.680120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.228 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.228 [2024-04-24 21:28:51.744458] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.228 [2024-04-24 21:28:51.848370] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.228 [2024-04-24 21:28:51.848425] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.228 [2024-04-24 21:28:51.848449] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.228 [2024-04-24 21:28:51.848460] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.228 [2024-04-24 21:28:51.848470] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
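
Annotation: with the negative cases done, the test mints a proper key. `format_interchange_psk` above shells out to an inline `python -` snippet that wraps the raw 48-character key string in the NVMe TLS PSK interchange format `NVMeTLSkey-1:<hh>:<base64(key || crc32(key))>:`, where digest 2 (hex `02`) selects SHA-384. A hedged reconstruction of that helper (mirrors nvmf/common.sh's format_key as I read it; verify against the tree):

```python
# Reconstruction of format_interchange_psk: base64 of the key bytes with a
# 4-byte little-endian CRC32 trailer, wrapped in the interchange framing.
import base64
import zlib

def format_interchange_psk(key: bytes, digest: int) -> str:
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

key = b"00112233445566778899aabbccddeeff0011223344556677"
print(format_interchange_psk(key, 2))
# expected to reproduce the key_long value logged above:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
```

The `mktemp` / `echo -n` / `chmod 0600` steps then park that key at /tmp/tmp.vEaWycc1IQ with owner-only permissions, which matters for the 0666 regression test later in this run.
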
00:14:26.228 [2024-04-24 21:28:51.848497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.487 21:28:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:26.487 21:28:51 -- common/autotest_common.sh@850 -- # return 0 00:14:26.487 21:28:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:26.487 21:28:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:26.487 21:28:51 -- common/autotest_common.sh@10 -- # set +x 00:14:26.487 21:28:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.487 21:28:51 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.vEaWycc1IQ 00:14:26.487 21:28:51 -- target/tls.sh@49 -- # local key=/tmp/tmp.vEaWycc1IQ 00:14:26.487 21:28:51 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:26.745 [2024-04-24 21:28:52.221007] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.745 21:28:52 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:27.004 21:28:52 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:27.263 [2024-04-24 21:28:52.754445] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:27.263 [2024-04-24 21:28:52.754727] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.263 21:28:52 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:27.521 malloc0 00:14:27.521 21:28:53 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:27.779 21:28:53 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vEaWycc1IQ 00:14:28.038 [2024-04-24 21:28:53.495991] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:28.038 21:28:53 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vEaWycc1IQ 00:14:28.038 21:28:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:28.038 21:28:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:28.038 21:28:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:28.038 21:28:53 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vEaWycc1IQ' 00:14:28.038 21:28:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:28.038 21:28:53 -- target/tls.sh@28 -- # bdevperf_pid=2604181 00:14:28.038 21:28:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:28.038 21:28:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:28.038 21:28:53 -- target/tls.sh@31 -- # waitforlisten 2604181 /var/tmp/bdevperf.sock 00:14:28.038 21:28:53 -- common/autotest_common.sh@817 -- # '[' -z 2604181 ']' 00:14:28.038 21:28:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:28.038 21:28:53 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:14:28.038 21:28:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:28.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:28.038 21:28:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:28.038 21:28:53 -- common/autotest_common.sh@10 -- # set +x 00:14:28.038 [2024-04-24 21:28:53.548976] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:28.038 [2024-04-24 21:28:53.549040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604181 ] 00:14:28.038 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.038 [2024-04-24 21:28:53.605192] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.039 [2024-04-24 21:28:53.709526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.296 21:28:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:28.296 21:28:53 -- common/autotest_common.sh@850 -- # return 0 00:14:28.296 21:28:53 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vEaWycc1IQ 00:14:28.553 [2024-04-24 21:28:54.035778] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:28.553 [2024-04-24 21:28:54.035882] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:28.553 TLSTESTn1 00:14:28.553 21:28:54 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:28.553 Running I/O for 10 seconds... 
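
Annotation: this attach finally succeeds (matching host1/cnode1 identity, readable 0600 key), bdev TLSTESTn1 appears, and bdevperf.py drives the verify workload whose throughput table follows below. As I read it, bdevperf.py is itself just a thin JSON-RPC client: the bdevperf app was started with `-z` so it idles until told to run, and `-t 20` on the bdevperf.py command line is the client-side wait timeout. A sketch reusing `rpc_call()` from the first annotation:

```python
# What "bdevperf.py ... perform_tests" amounts to: one RPC to the idling
# bdevperf app; the reply arrives once the 10 s verify workload completes.
result = rpc_call("/var/tmp/bdevperf.sock", "perform_tests")
print(result)
```
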
00:14:40.750 00:14:40.750 Latency(us) 00:14:40.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.750 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:40.750 Verification LBA range: start 0x0 length 0x2000 00:14:40.750 TLSTESTn1 : 10.05 1140.49 4.46 0.00 0.00 111980.10 8349.77 118061.89 00:14:40.750 =================================================================================================================== 00:14:40.750 Total : 1140.49 4.46 0.00 0.00 111980.10 8349.77 118061.89 00:14:40.750 0 00:14:40.750 21:29:04 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:40.750 21:29:04 -- target/tls.sh@45 -- # killprocess 2604181 00:14:40.750 21:29:04 -- common/autotest_common.sh@936 -- # '[' -z 2604181 ']' 00:14:40.750 21:29:04 -- common/autotest_common.sh@940 -- # kill -0 2604181 00:14:40.750 21:29:04 -- common/autotest_common.sh@941 -- # uname 00:14:40.750 21:29:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:40.750 21:29:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2604181 00:14:40.750 21:29:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:40.750 21:29:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:40.750 21:29:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2604181' 00:14:40.750 killing process with pid 2604181 00:14:40.750 21:29:04 -- common/autotest_common.sh@955 -- # kill 2604181 00:14:40.750 Received shutdown signal, test time was about 10.000000 seconds 00:14:40.750 00:14:40.750 Latency(us) 00:14:40.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.750 =================================================================================================================== 00:14:40.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:40.750 [2024-04-24 21:29:04.336416] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:40.750 21:29:04 -- common/autotest_common.sh@960 -- # wait 2604181 00:14:40.750 21:29:04 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.vEaWycc1IQ 00:14:40.750 21:29:04 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vEaWycc1IQ 00:14:40.750 21:29:04 -- common/autotest_common.sh@638 -- # local es=0 00:14:40.750 21:29:04 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vEaWycc1IQ 00:14:40.750 21:29:04 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:40.750 21:29:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:40.750 21:29:04 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:40.750 21:29:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:40.750 21:29:04 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vEaWycc1IQ 00:14:40.750 21:29:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:40.750 21:29:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:40.750 21:29:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:40.750 21:29:04 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vEaWycc1IQ' 00:14:40.750 21:29:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:40.750 21:29:04 -- target/tls.sh@28 -- # 
bdevperf_pid=2605458 00:14:40.750 21:29:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.750 21:29:04 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:40.750 21:29:04 -- target/tls.sh@31 -- # waitforlisten 2605458 /var/tmp/bdevperf.sock 00:14:40.750 21:29:04 -- common/autotest_common.sh@817 -- # '[' -z 2605458 ']' 00:14:40.750 21:29:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.750 21:29:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:40.750 21:29:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:40.750 21:29:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:40.750 21:29:04 -- common/autotest_common.sh@10 -- # set +x 00:14:40.750 [2024-04-24 21:29:04.644116] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:40.750 [2024-04-24 21:29:04.644211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605458 ] 00:14:40.750 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.750 [2024-04-24 21:29:04.702798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.750 [2024-04-24 21:29:04.805125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.750 21:29:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:40.750 21:29:04 -- common/autotest_common.sh@850 -- # return 0 00:14:40.750 21:29:04 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vEaWycc1IQ 00:14:40.750 [2024-04-24 21:29:05.187017] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:40.750 [2024-04-24 21:29:05.187093] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:40.750 [2024-04-24 21:29:05.187113] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.vEaWycc1IQ 00:14:40.750 request: 00:14:40.750 { 00:14:40.750 "name": "TLSTEST", 00:14:40.750 "trtype": "tcp", 00:14:40.750 "traddr": "10.0.0.2", 00:14:40.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.750 "adrfam": "ipv4", 00:14:40.750 "trsvcid": "4420", 00:14:40.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.750 "psk": "/tmp/tmp.vEaWycc1IQ", 00:14:40.750 "method": "bdev_nvme_attach_controller", 00:14:40.750 "req_id": 1 00:14:40.750 } 00:14:40.750 Got JSON-RPC error response 00:14:40.750 response: 00:14:40.750 { 00:14:40.750 "code": -1, 00:14:40.750 "message": "Operation not permitted" 00:14:40.750 } 00:14:40.750 21:29:05 -- target/tls.sh@36 -- # killprocess 2605458 00:14:40.750 21:29:05 -- common/autotest_common.sh@936 -- # '[' -z 2605458 ']' 00:14:40.750 21:29:05 -- common/autotest_common.sh@940 -- # kill -0 2605458 00:14:40.750 21:29:05 -- common/autotest_common.sh@941 -- # uname 00:14:40.750 21:29:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:40.751 
21:29:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2605458 00:14:40.751 21:29:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:40.751 21:29:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:40.751 21:29:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2605458' 00:14:40.751 killing process with pid 2605458 00:14:40.751 21:29:05 -- common/autotest_common.sh@955 -- # kill 2605458 00:14:40.751 Received shutdown signal, test time was about 10.000000 seconds 00:14:40.751 00:14:40.751 Latency(us) 00:14:40.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.751 =================================================================================================================== 00:14:40.751 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:40.751 21:29:05 -- common/autotest_common.sh@960 -- # wait 2605458 00:14:40.751 21:29:05 -- target/tls.sh@37 -- # return 1 00:14:40.751 21:29:05 -- common/autotest_common.sh@641 -- # es=1 00:14:40.751 21:29:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:40.751 21:29:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:40.751 21:29:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:40.751 21:29:05 -- target/tls.sh@174 -- # killprocess 2603903 00:14:40.751 21:29:05 -- common/autotest_common.sh@936 -- # '[' -z 2603903 ']' 00:14:40.751 21:29:05 -- common/autotest_common.sh@940 -- # kill -0 2603903 00:14:40.751 21:29:05 -- common/autotest_common.sh@941 -- # uname 00:14:40.751 21:29:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:40.751 21:29:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2603903 00:14:40.751 21:29:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:40.751 21:29:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:40.751 21:29:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2603903' 00:14:40.751 killing process with pid 2603903 00:14:40.751 21:29:05 -- common/autotest_common.sh@955 -- # kill 2603903 00:14:40.751 [2024-04-24 21:29:05.523663] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:40.751 21:29:05 -- common/autotest_common.sh@960 -- # wait 2603903 00:14:40.751 21:29:05 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:40.751 21:29:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:40.751 21:29:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:40.751 21:29:05 -- common/autotest_common.sh@10 -- # set +x 00:14:40.751 21:29:05 -- nvmf/common.sh@470 -- # nvmfpid=2605647 00:14:40.751 21:29:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:40.751 21:29:05 -- nvmf/common.sh@471 -- # waitforlisten 2605647 00:14:40.751 21:29:05 -- common/autotest_common.sh@817 -- # '[' -z 2605647 ']' 00:14:40.751 21:29:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.751 21:29:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:40.751 21:29:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
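
Annotation: this block is the key-file permissions regression test. `chmod 0666` on the key (target/tls.sh@170 above) makes the initiator refuse to load it ("Incorrect permissions for PSK file" → "Operation not permitted"), and the freshly restarted target below trips the same check in nvmf_subsystem_add_host ("Could not retrieve PSK from file" → "Internal error"). The gate appears roughly equivalent to rejecting any group/other access bits — a sketch; the exact mask lives in the C code (tcp.c / bdev_nvme.c):

```python
# Rough equivalent of the PSK file permission gate, assuming SPDK rejects any
# group- or other-accessible key file (exact check is in the C sources).
import os
import stat

def psk_file_ok(path: str) -> bool:
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# chmod 0600 -> True (the successful run above); chmod 0666 -> False, which
# is why both the attach and nvmf_subsystem_add_host fail in this section.
```
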
00:14:40.751 21:29:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:40.751 21:29:05 -- common/autotest_common.sh@10 -- # set +x 00:14:40.751 [2024-04-24 21:29:05.851062] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:40.751 [2024-04-24 21:29:05.851145] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.751 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.751 [2024-04-24 21:29:05.914656] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.751 [2024-04-24 21:29:06.025390] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.751 [2024-04-24 21:29:06.025459] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.751 [2024-04-24 21:29:06.025485] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.751 [2024-04-24 21:29:06.025499] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.751 [2024-04-24 21:29:06.025511] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.751 [2024-04-24 21:29:06.025543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.317 21:29:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:41.317 21:29:06 -- common/autotest_common.sh@850 -- # return 0 00:14:41.317 21:29:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:41.318 21:29:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:41.318 21:29:06 -- common/autotest_common.sh@10 -- # set +x 00:14:41.318 21:29:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.318 21:29:06 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.vEaWycc1IQ 00:14:41.318 21:29:06 -- common/autotest_common.sh@638 -- # local es=0 00:14:41.318 21:29:06 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.vEaWycc1IQ 00:14:41.318 21:29:06 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:14:41.318 21:29:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:41.318 21:29:06 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:14:41.318 21:29:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:41.318 21:29:06 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.vEaWycc1IQ 00:14:41.318 21:29:06 -- target/tls.sh@49 -- # local key=/tmp/tmp.vEaWycc1IQ 00:14:41.318 21:29:06 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:41.576 [2024-04-24 21:29:07.032776] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.576 21:29:07 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:41.834 21:29:07 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:42.092 [2024-04-24 21:29:07.602279] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:42.092 [2024-04-24 21:29:07.602532] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.092 21:29:07 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:42.348 malloc0 00:14:42.349 21:29:07 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:42.606 21:29:08 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vEaWycc1IQ 00:14:42.864 [2024-04-24 21:29:08.353011] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:42.864 [2024-04-24 21:29:08.353056] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:42.864 [2024-04-24 21:29:08.353084] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:14:42.864 request: 00:14:42.864 { 00:14:42.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.864 "host": "nqn.2016-06.io.spdk:host1", 00:14:42.864 "psk": "/tmp/tmp.vEaWycc1IQ", 00:14:42.864 "method": "nvmf_subsystem_add_host", 00:14:42.864 "req_id": 1 00:14:42.864 } 00:14:42.864 Got JSON-RPC error response 00:14:42.864 response: 00:14:42.864 { 00:14:42.864 "code": -32603, 00:14:42.864 "message": "Internal error" 00:14:42.864 } 00:14:42.864 21:29:08 -- common/autotest_common.sh@641 -- # es=1 00:14:42.864 21:29:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:42.864 21:29:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:42.864 21:29:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:42.864 21:29:08 -- target/tls.sh@180 -- # killprocess 2605647 00:14:42.864 21:29:08 -- common/autotest_common.sh@936 -- # '[' -z 2605647 ']' 00:14:42.864 21:29:08 -- common/autotest_common.sh@940 -- # kill -0 2605647 00:14:42.864 21:29:08 -- common/autotest_common.sh@941 -- # uname 00:14:42.864 21:29:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:42.864 21:29:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2605647 00:14:42.864 21:29:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:42.864 21:29:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:42.864 21:29:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2605647' 00:14:42.864 killing process with pid 2605647 00:14:42.864 21:29:08 -- common/autotest_common.sh@955 -- # kill 2605647 00:14:42.864 21:29:08 -- common/autotest_common.sh@960 -- # wait 2605647 00:14:43.122 21:29:08 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.vEaWycc1IQ 00:14:43.122 21:29:08 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:43.122 21:29:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:43.122 21:29:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:43.122 21:29:08 -- common/autotest_common.sh@10 -- # set +x 00:14:43.122 21:29:08 -- nvmf/common.sh@470 -- # nvmfpid=2605952 00:14:43.122 21:29:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:43.122 21:29:08 -- nvmf/common.sh@471 -- # waitforlisten 2605952 00:14:43.122 21:29:08 -- common/autotest_common.sh@817 -- # '[' -z 2605952 ']' 00:14:43.122 21:29:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.122 21:29:08 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:14:43.122 21:29:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.122 21:29:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:43.122 21:29:08 -- common/autotest_common.sh@10 -- # set +x 00:14:43.122 [2024-04-24 21:29:08.730293] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:43.122 [2024-04-24 21:29:08.730372] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.122 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.122 [2024-04-24 21:29:08.796064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.380 [2024-04-24 21:29:08.902404] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.380 [2024-04-24 21:29:08.902463] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.380 [2024-04-24 21:29:08.902477] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.380 [2024-04-24 21:29:08.902488] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.380 [2024-04-24 21:29:08.902498] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.380 [2024-04-24 21:29:08.902526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.380 21:29:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.380 21:29:09 -- common/autotest_common.sh@850 -- # return 0 00:14:43.380 21:29:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:43.380 21:29:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:43.380 21:29:09 -- common/autotest_common.sh@10 -- # set +x 00:14:43.380 21:29:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.380 21:29:09 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.vEaWycc1IQ 00:14:43.380 21:29:09 -- target/tls.sh@49 -- # local key=/tmp/tmp.vEaWycc1IQ 00:14:43.380 21:29:09 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:43.638 [2024-04-24 21:29:09.260907] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.638 21:29:09 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:43.895 21:29:09 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:44.154 [2024-04-24 21:29:09.782272] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:44.154 [2024-04-24 21:29:09.782531] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.154 21:29:09 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:44.412 malloc0 00:14:44.412 21:29:10 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:44.670 21:29:10 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vEaWycc1IQ 00:14:44.928 [2024-04-24 21:29:10.559908] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:44.928 21:29:10 -- target/tls.sh@188 -- # bdevperf_pid=2606237 00:14:44.928 21:29:10 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:44.928 21:29:10 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:44.928 21:29:10 -- target/tls.sh@191 -- # waitforlisten 2606237 /var/tmp/bdevperf.sock 00:14:44.928 21:29:10 -- common/autotest_common.sh@817 -- # '[' -z 2606237 ']' 00:14:44.928 21:29:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.928 21:29:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:44.928 21:29:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.928 21:29:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:44.928 21:29:10 -- common/autotest_common.sh@10 -- # set +x 00:14:45.186 [2024-04-24 21:29:10.623252] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:45.186 [2024-04-24 21:29:10.623337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606237 ] 00:14:45.186 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.186 [2024-04-24 21:29:10.682873] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.186 [2024-04-24 21:29:10.790513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.444 21:29:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:45.444 21:29:10 -- common/autotest_common.sh@850 -- # return 0 00:14:45.444 21:29:10 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vEaWycc1IQ 00:14:45.701 [2024-04-24 21:29:11.156504] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:45.701 [2024-04-24 21:29:11.156635] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:45.701 TLSTESTn1 00:14:45.701 21:29:11 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:14:45.959 21:29:11 -- target/tls.sh@196 -- # tgtconf='{ 00:14:45.959 "subsystems": [ 00:14:45.959 { 00:14:45.959 "subsystem": "keyring", 00:14:45.959 "config": [] 00:14:45.959 }, 00:14:45.959 { 00:14:45.959 "subsystem": "iobuf", 00:14:45.959 "config": [ 00:14:45.959 { 00:14:45.959 "method": "iobuf_set_options", 00:14:45.959 "params": { 00:14:45.959 
"small_pool_count": 8192, 00:14:45.959 "large_pool_count": 1024, 00:14:45.959 "small_bufsize": 8192, 00:14:45.959 "large_bufsize": 135168 00:14:45.959 } 00:14:45.959 } 00:14:45.959 ] 00:14:45.959 }, 00:14:45.959 { 00:14:45.959 "subsystem": "sock", 00:14:45.959 "config": [ 00:14:45.959 { 00:14:45.959 "method": "sock_impl_set_options", 00:14:45.959 "params": { 00:14:45.959 "impl_name": "posix", 00:14:45.959 "recv_buf_size": 2097152, 00:14:45.959 "send_buf_size": 2097152, 00:14:45.959 "enable_recv_pipe": true, 00:14:45.959 "enable_quickack": false, 00:14:45.959 "enable_placement_id": 0, 00:14:45.959 "enable_zerocopy_send_server": true, 00:14:45.959 "enable_zerocopy_send_client": false, 00:14:45.959 "zerocopy_threshold": 0, 00:14:45.959 "tls_version": 0, 00:14:45.959 "enable_ktls": false 00:14:45.959 } 00:14:45.959 }, 00:14:45.959 { 00:14:45.959 "method": "sock_impl_set_options", 00:14:45.959 "params": { 00:14:45.959 "impl_name": "ssl", 00:14:45.959 "recv_buf_size": 4096, 00:14:45.959 "send_buf_size": 4096, 00:14:45.959 "enable_recv_pipe": true, 00:14:45.959 "enable_quickack": false, 00:14:45.959 "enable_placement_id": 0, 00:14:45.959 "enable_zerocopy_send_server": true, 00:14:45.959 "enable_zerocopy_send_client": false, 00:14:45.959 "zerocopy_threshold": 0, 00:14:45.959 "tls_version": 0, 00:14:45.959 "enable_ktls": false 00:14:45.959 } 00:14:45.959 } 00:14:45.959 ] 00:14:45.959 }, 00:14:45.959 { 00:14:45.959 "subsystem": "vmd", 00:14:45.959 "config": [] 00:14:45.959 }, 00:14:45.959 { 00:14:45.959 "subsystem": "accel", 00:14:45.959 "config": [ 00:14:45.959 { 00:14:45.959 "method": "accel_set_options", 00:14:45.959 "params": { 00:14:45.959 "small_cache_size": 128, 00:14:45.959 "large_cache_size": 16, 00:14:45.959 "task_count": 2048, 00:14:45.959 "sequence_count": 2048, 00:14:45.959 "buf_count": 2048 00:14:45.959 } 00:14:45.959 } 00:14:45.959 ] 00:14:45.959 }, 00:14:45.959 { 00:14:45.959 "subsystem": "bdev", 00:14:45.959 "config": [ 00:14:45.959 { 00:14:45.959 "method": "bdev_set_options", 00:14:45.959 "params": { 00:14:45.959 "bdev_io_pool_size": 65535, 00:14:45.959 "bdev_io_cache_size": 256, 00:14:45.959 "bdev_auto_examine": true, 00:14:45.959 "iobuf_small_cache_size": 128, 00:14:45.959 "iobuf_large_cache_size": 16 00:14:45.959 } 00:14:45.959 }, 00:14:45.959 { 00:14:45.959 "method": "bdev_raid_set_options", 00:14:45.959 "params": { 00:14:45.959 "process_window_size_kb": 1024 00:14:45.959 } 00:14:45.959 }, 00:14:45.959 { 00:14:45.959 "method": "bdev_iscsi_set_options", 00:14:45.959 "params": { 00:14:45.959 "timeout_sec": 30 00:14:45.959 } 00:14:45.959 }, 00:14:45.959 { 00:14:45.959 "method": "bdev_nvme_set_options", 00:14:45.959 "params": { 00:14:45.959 "action_on_timeout": "none", 00:14:45.959 "timeout_us": 0, 00:14:45.959 "timeout_admin_us": 0, 00:14:45.959 "keep_alive_timeout_ms": 10000, 00:14:45.959 "arbitration_burst": 0, 00:14:45.959 "low_priority_weight": 0, 00:14:45.959 "medium_priority_weight": 0, 00:14:45.959 "high_priority_weight": 0, 00:14:45.959 "nvme_adminq_poll_period_us": 10000, 00:14:45.959 "nvme_ioq_poll_period_us": 0, 00:14:45.959 "io_queue_requests": 0, 00:14:45.959 "delay_cmd_submit": true, 00:14:45.959 "transport_retry_count": 4, 00:14:45.959 "bdev_retry_count": 3, 00:14:45.959 "transport_ack_timeout": 0, 00:14:45.959 "ctrlr_loss_timeout_sec": 0, 00:14:45.959 "reconnect_delay_sec": 0, 00:14:45.959 "fast_io_fail_timeout_sec": 0, 00:14:45.960 "disable_auto_failback": false, 00:14:45.960 "generate_uuids": false, 00:14:45.960 "transport_tos": 0, 00:14:45.960 "nvme_error_stat": 
false, 00:14:45.960 "rdma_srq_size": 0, 00:14:45.960 "io_path_stat": false, 00:14:45.960 "allow_accel_sequence": false, 00:14:45.960 "rdma_max_cq_size": 0, 00:14:45.960 "rdma_cm_event_timeout_ms": 0, 00:14:45.960 "dhchap_digests": [ 00:14:45.960 "sha256", 00:14:45.960 "sha384", 00:14:45.960 "sha512" 00:14:45.960 ], 00:14:45.960 "dhchap_dhgroups": [ 00:14:45.960 "null", 00:14:45.960 "ffdhe2048", 00:14:45.960 "ffdhe3072", 00:14:45.960 "ffdhe4096", 00:14:45.960 "ffdhe6144", 00:14:45.960 "ffdhe8192" 00:14:45.960 ] 00:14:45.960 } 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "method": "bdev_nvme_set_hotplug", 00:14:45.960 "params": { 00:14:45.960 "period_us": 100000, 00:14:45.960 "enable": false 00:14:45.960 } 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "method": "bdev_malloc_create", 00:14:45.960 "params": { 00:14:45.960 "name": "malloc0", 00:14:45.960 "num_blocks": 8192, 00:14:45.960 "block_size": 4096, 00:14:45.960 "physical_block_size": 4096, 00:14:45.960 "uuid": "211274d2-6644-49bd-b154-7c9cdc52c7da", 00:14:45.960 "optimal_io_boundary": 0 00:14:45.960 } 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "method": "bdev_wait_for_examine" 00:14:45.960 } 00:14:45.960 ] 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "subsystem": "nbd", 00:14:45.960 "config": [] 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "subsystem": "scheduler", 00:14:45.960 "config": [ 00:14:45.960 { 00:14:45.960 "method": "framework_set_scheduler", 00:14:45.960 "params": { 00:14:45.960 "name": "static" 00:14:45.960 } 00:14:45.960 } 00:14:45.960 ] 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "subsystem": "nvmf", 00:14:45.960 "config": [ 00:14:45.960 { 00:14:45.960 "method": "nvmf_set_config", 00:14:45.960 "params": { 00:14:45.960 "discovery_filter": "match_any", 00:14:45.960 "admin_cmd_passthru": { 00:14:45.960 "identify_ctrlr": false 00:14:45.960 } 00:14:45.960 } 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "method": "nvmf_set_max_subsystems", 00:14:45.960 "params": { 00:14:45.960 "max_subsystems": 1024 00:14:45.960 } 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "method": "nvmf_set_crdt", 00:14:45.960 "params": { 00:14:45.960 "crdt1": 0, 00:14:45.960 "crdt2": 0, 00:14:45.960 "crdt3": 0 00:14:45.960 } 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "method": "nvmf_create_transport", 00:14:45.960 "params": { 00:14:45.960 "trtype": "TCP", 00:14:45.960 "max_queue_depth": 128, 00:14:45.960 "max_io_qpairs_per_ctrlr": 127, 00:14:45.960 "in_capsule_data_size": 4096, 00:14:45.960 "max_io_size": 131072, 00:14:45.960 "io_unit_size": 131072, 00:14:45.960 "max_aq_depth": 128, 00:14:45.960 "num_shared_buffers": 511, 00:14:45.960 "buf_cache_size": 4294967295, 00:14:45.960 "dif_insert_or_strip": false, 00:14:45.960 "zcopy": false, 00:14:45.960 "c2h_success": false, 00:14:45.960 "sock_priority": 0, 00:14:45.960 "abort_timeout_sec": 1, 00:14:45.960 "ack_timeout": 0 00:14:45.960 } 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "method": "nvmf_create_subsystem", 00:14:45.960 "params": { 00:14:45.960 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.960 "allow_any_host": false, 00:14:45.960 "serial_number": "SPDK00000000000001", 00:14:45.960 "model_number": "SPDK bdev Controller", 00:14:45.960 "max_namespaces": 10, 00:14:45.960 "min_cntlid": 1, 00:14:45.960 "max_cntlid": 65519, 00:14:45.960 "ana_reporting": false 00:14:45.960 } 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "method": "nvmf_subsystem_add_host", 00:14:45.960 "params": { 00:14:45.960 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.960 "host": "nqn.2016-06.io.spdk:host1", 00:14:45.960 "psk": 
"/tmp/tmp.vEaWycc1IQ" 00:14:45.960 } 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "method": "nvmf_subsystem_add_ns", 00:14:45.960 "params": { 00:14:45.960 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.960 "namespace": { 00:14:45.960 "nsid": 1, 00:14:45.960 "bdev_name": "malloc0", 00:14:45.960 "nguid": "211274D2664449BDB1547C9CDC52C7DA", 00:14:45.960 "uuid": "211274d2-6644-49bd-b154-7c9cdc52c7da", 00:14:45.960 "no_auto_visible": false 00:14:45.960 } 00:14:45.960 } 00:14:45.960 }, 00:14:45.960 { 00:14:45.960 "method": "nvmf_subsystem_add_listener", 00:14:45.960 "params": { 00:14:45.960 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.960 "listen_address": { 00:14:45.960 "trtype": "TCP", 00:14:45.960 "adrfam": "IPv4", 00:14:45.960 "traddr": "10.0.0.2", 00:14:45.960 "trsvcid": "4420" 00:14:45.960 }, 00:14:45.960 "secure_channel": true 00:14:45.960 } 00:14:45.960 } 00:14:45.960 ] 00:14:45.960 } 00:14:45.960 ] 00:14:45.960 }' 00:14:45.960 21:29:11 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:46.526 21:29:11 -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:46.526 "subsystems": [ 00:14:46.526 { 00:14:46.526 "subsystem": "keyring", 00:14:46.526 "config": [] 00:14:46.526 }, 00:14:46.526 { 00:14:46.526 "subsystem": "iobuf", 00:14:46.526 "config": [ 00:14:46.526 { 00:14:46.526 "method": "iobuf_set_options", 00:14:46.526 "params": { 00:14:46.526 "small_pool_count": 8192, 00:14:46.526 "large_pool_count": 1024, 00:14:46.526 "small_bufsize": 8192, 00:14:46.526 "large_bufsize": 135168 00:14:46.526 } 00:14:46.526 } 00:14:46.526 ] 00:14:46.526 }, 00:14:46.526 { 00:14:46.526 "subsystem": "sock", 00:14:46.526 "config": [ 00:14:46.526 { 00:14:46.526 "method": "sock_impl_set_options", 00:14:46.526 "params": { 00:14:46.526 "impl_name": "posix", 00:14:46.526 "recv_buf_size": 2097152, 00:14:46.526 "send_buf_size": 2097152, 00:14:46.526 "enable_recv_pipe": true, 00:14:46.526 "enable_quickack": false, 00:14:46.526 "enable_placement_id": 0, 00:14:46.526 "enable_zerocopy_send_server": true, 00:14:46.526 "enable_zerocopy_send_client": false, 00:14:46.526 "zerocopy_threshold": 0, 00:14:46.526 "tls_version": 0, 00:14:46.526 "enable_ktls": false 00:14:46.526 } 00:14:46.526 }, 00:14:46.526 { 00:14:46.526 "method": "sock_impl_set_options", 00:14:46.526 "params": { 00:14:46.526 "impl_name": "ssl", 00:14:46.526 "recv_buf_size": 4096, 00:14:46.526 "send_buf_size": 4096, 00:14:46.526 "enable_recv_pipe": true, 00:14:46.526 "enable_quickack": false, 00:14:46.526 "enable_placement_id": 0, 00:14:46.526 "enable_zerocopy_send_server": true, 00:14:46.526 "enable_zerocopy_send_client": false, 00:14:46.526 "zerocopy_threshold": 0, 00:14:46.526 "tls_version": 0, 00:14:46.526 "enable_ktls": false 00:14:46.526 } 00:14:46.526 } 00:14:46.526 ] 00:14:46.526 }, 00:14:46.526 { 00:14:46.526 "subsystem": "vmd", 00:14:46.526 "config": [] 00:14:46.526 }, 00:14:46.526 { 00:14:46.526 "subsystem": "accel", 00:14:46.526 "config": [ 00:14:46.526 { 00:14:46.526 "method": "accel_set_options", 00:14:46.526 "params": { 00:14:46.526 "small_cache_size": 128, 00:14:46.526 "large_cache_size": 16, 00:14:46.526 "task_count": 2048, 00:14:46.526 "sequence_count": 2048, 00:14:46.526 "buf_count": 2048 00:14:46.526 } 00:14:46.526 } 00:14:46.526 ] 00:14:46.526 }, 00:14:46.526 { 00:14:46.526 "subsystem": "bdev", 00:14:46.526 "config": [ 00:14:46.526 { 00:14:46.526 "method": "bdev_set_options", 00:14:46.526 "params": { 00:14:46.526 "bdev_io_pool_size": 65535, 00:14:46.526 
"bdev_io_cache_size": 256, 00:14:46.526 "bdev_auto_examine": true, 00:14:46.526 "iobuf_small_cache_size": 128, 00:14:46.526 "iobuf_large_cache_size": 16 00:14:46.526 } 00:14:46.526 }, 00:14:46.526 { 00:14:46.526 "method": "bdev_raid_set_options", 00:14:46.526 "params": { 00:14:46.526 "process_window_size_kb": 1024 00:14:46.526 } 00:14:46.526 }, 00:14:46.526 { 00:14:46.526 "method": "bdev_iscsi_set_options", 00:14:46.526 "params": { 00:14:46.526 "timeout_sec": 30 00:14:46.526 } 00:14:46.526 }, 00:14:46.526 { 00:14:46.526 "method": "bdev_nvme_set_options", 00:14:46.526 "params": { 00:14:46.526 "action_on_timeout": "none", 00:14:46.526 "timeout_us": 0, 00:14:46.526 "timeout_admin_us": 0, 00:14:46.526 "keep_alive_timeout_ms": 10000, 00:14:46.526 "arbitration_burst": 0, 00:14:46.526 "low_priority_weight": 0, 00:14:46.526 "medium_priority_weight": 0, 00:14:46.526 "high_priority_weight": 0, 00:14:46.526 "nvme_adminq_poll_period_us": 10000, 00:14:46.526 "nvme_ioq_poll_period_us": 0, 00:14:46.526 "io_queue_requests": 512, 00:14:46.526 "delay_cmd_submit": true, 00:14:46.526 "transport_retry_count": 4, 00:14:46.526 "bdev_retry_count": 3, 00:14:46.526 "transport_ack_timeout": 0, 00:14:46.526 "ctrlr_loss_timeout_sec": 0, 00:14:46.526 "reconnect_delay_sec": 0, 00:14:46.526 "fast_io_fail_timeout_sec": 0, 00:14:46.526 "disable_auto_failback": false, 00:14:46.526 "generate_uuids": false, 00:14:46.526 "transport_tos": 0, 00:14:46.526 "nvme_error_stat": false, 00:14:46.526 "rdma_srq_size": 0, 00:14:46.526 "io_path_stat": false, 00:14:46.526 "allow_accel_sequence": false, 00:14:46.526 "rdma_max_cq_size": 0, 00:14:46.526 "rdma_cm_event_timeout_ms": 0, 00:14:46.526 "dhchap_digests": [ 00:14:46.526 "sha256", 00:14:46.526 "sha384", 00:14:46.526 "sha512" 00:14:46.526 ], 00:14:46.526 "dhchap_dhgroups": [ 00:14:46.526 "null", 00:14:46.526 "ffdhe2048", 00:14:46.526 "ffdhe3072", 00:14:46.526 "ffdhe4096", 00:14:46.526 "ffdhe6144", 00:14:46.526 "ffdhe8192" 00:14:46.526 ] 00:14:46.526 } 00:14:46.526 }, 00:14:46.526 { 00:14:46.527 "method": "bdev_nvme_attach_controller", 00:14:46.527 "params": { 00:14:46.527 "name": "TLSTEST", 00:14:46.527 "trtype": "TCP", 00:14:46.527 "adrfam": "IPv4", 00:14:46.527 "traddr": "10.0.0.2", 00:14:46.527 "trsvcid": "4420", 00:14:46.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.527 "prchk_reftag": false, 00:14:46.527 "prchk_guard": false, 00:14:46.527 "ctrlr_loss_timeout_sec": 0, 00:14:46.527 "reconnect_delay_sec": 0, 00:14:46.527 "fast_io_fail_timeout_sec": 0, 00:14:46.527 "psk": "/tmp/tmp.vEaWycc1IQ", 00:14:46.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:46.527 "hdgst": false, 00:14:46.527 "ddgst": false 00:14:46.527 } 00:14:46.527 }, 00:14:46.527 { 00:14:46.527 "method": "bdev_nvme_set_hotplug", 00:14:46.527 "params": { 00:14:46.527 "period_us": 100000, 00:14:46.527 "enable": false 00:14:46.527 } 00:14:46.527 }, 00:14:46.527 { 00:14:46.527 "method": "bdev_wait_for_examine" 00:14:46.527 } 00:14:46.527 ] 00:14:46.527 }, 00:14:46.527 { 00:14:46.527 "subsystem": "nbd", 00:14:46.527 "config": [] 00:14:46.527 } 00:14:46.527 ] 00:14:46.527 }' 00:14:46.527 21:29:11 -- target/tls.sh@199 -- # killprocess 2606237 00:14:46.527 21:29:11 -- common/autotest_common.sh@936 -- # '[' -z 2606237 ']' 00:14:46.527 21:29:11 -- common/autotest_common.sh@940 -- # kill -0 2606237 00:14:46.527 21:29:11 -- common/autotest_common.sh@941 -- # uname 00:14:46.527 21:29:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.527 21:29:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 2606237 00:14:46.527 21:29:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:46.527 21:29:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:46.527 21:29:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2606237' 00:14:46.527 killing process with pid 2606237 00:14:46.527 21:29:11 -- common/autotest_common.sh@955 -- # kill 2606237 00:14:46.527 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.527 00:14:46.527 Latency(us) 00:14:46.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.527 =================================================================================================================== 00:14:46.527 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:46.527 [2024-04-24 21:29:11.998810] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:46.527 21:29:11 -- common/autotest_common.sh@960 -- # wait 2606237 00:14:46.785 21:29:12 -- target/tls.sh@200 -- # killprocess 2605952 00:14:46.785 21:29:12 -- common/autotest_common.sh@936 -- # '[' -z 2605952 ']' 00:14:46.785 21:29:12 -- common/autotest_common.sh@940 -- # kill -0 2605952 00:14:46.785 21:29:12 -- common/autotest_common.sh@941 -- # uname 00:14:46.785 21:29:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.785 21:29:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2605952 00:14:46.785 21:29:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:46.785 21:29:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:46.785 21:29:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2605952' 00:14:46.785 killing process with pid 2605952 00:14:46.785 21:29:12 -- common/autotest_common.sh@955 -- # kill 2605952 00:14:46.785 [2024-04-24 21:29:12.292426] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:46.785 21:29:12 -- common/autotest_common.sh@960 -- # wait 2605952 00:14:47.044 21:29:12 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:47.044 21:29:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:47.044 21:29:12 -- target/tls.sh@203 -- # echo '{ 00:14:47.044 "subsystems": [ 00:14:47.044 { 00:14:47.044 "subsystem": "keyring", 00:14:47.044 "config": [] 00:14:47.044 }, 00:14:47.044 { 00:14:47.044 "subsystem": "iobuf", 00:14:47.044 "config": [ 00:14:47.044 { 00:14:47.044 "method": "iobuf_set_options", 00:14:47.044 "params": { 00:14:47.044 "small_pool_count": 8192, 00:14:47.044 "large_pool_count": 1024, 00:14:47.044 "small_bufsize": 8192, 00:14:47.044 "large_bufsize": 135168 00:14:47.044 } 00:14:47.044 } 00:14:47.044 ] 00:14:47.044 }, 00:14:47.044 { 00:14:47.044 "subsystem": "sock", 00:14:47.044 "config": [ 00:14:47.044 { 00:14:47.044 "method": "sock_impl_set_options", 00:14:47.044 "params": { 00:14:47.044 "impl_name": "posix", 00:14:47.044 "recv_buf_size": 2097152, 00:14:47.044 "send_buf_size": 2097152, 00:14:47.044 "enable_recv_pipe": true, 00:14:47.044 "enable_quickack": false, 00:14:47.044 "enable_placement_id": 0, 00:14:47.044 "enable_zerocopy_send_server": true, 00:14:47.044 "enable_zerocopy_send_client": false, 00:14:47.044 "zerocopy_threshold": 0, 00:14:47.044 "tls_version": 0, 00:14:47.044 "enable_ktls": false 00:14:47.044 } 00:14:47.044 }, 00:14:47.044 { 00:14:47.044 "method": 
"sock_impl_set_options", 00:14:47.044 "params": { 00:14:47.044 "impl_name": "ssl", 00:14:47.044 "recv_buf_size": 4096, 00:14:47.044 "send_buf_size": 4096, 00:14:47.044 "enable_recv_pipe": true, 00:14:47.044 "enable_quickack": false, 00:14:47.044 "enable_placement_id": 0, 00:14:47.044 "enable_zerocopy_send_server": true, 00:14:47.044 "enable_zerocopy_send_client": false, 00:14:47.044 "zerocopy_threshold": 0, 00:14:47.044 "tls_version": 0, 00:14:47.044 "enable_ktls": false 00:14:47.044 } 00:14:47.044 } 00:14:47.044 ] 00:14:47.044 }, 00:14:47.044 { 00:14:47.044 "subsystem": "vmd", 00:14:47.044 "config": [] 00:14:47.044 }, 00:14:47.044 { 00:14:47.044 "subsystem": "accel", 00:14:47.044 "config": [ 00:14:47.044 { 00:14:47.044 "method": "accel_set_options", 00:14:47.044 "params": { 00:14:47.044 "small_cache_size": 128, 00:14:47.044 "large_cache_size": 16, 00:14:47.044 "task_count": 2048, 00:14:47.044 "sequence_count": 2048, 00:14:47.044 "buf_count": 2048 00:14:47.044 } 00:14:47.044 } 00:14:47.044 ] 00:14:47.044 }, 00:14:47.044 { 00:14:47.044 "subsystem": "bdev", 00:14:47.044 "config": [ 00:14:47.044 { 00:14:47.044 "method": "bdev_set_options", 00:14:47.044 "params": { 00:14:47.044 "bdev_io_pool_size": 65535, 00:14:47.045 "bdev_io_cache_size": 256, 00:14:47.045 "bdev_auto_examine": true, 00:14:47.045 "iobuf_small_cache_size": 128, 00:14:47.045 "iobuf_large_cache_size": 16 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "bdev_raid_set_options", 00:14:47.045 "params": { 00:14:47.045 "process_window_size_kb": 1024 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "bdev_iscsi_set_options", 00:14:47.045 "params": { 00:14:47.045 "timeout_sec": 30 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "bdev_nvme_set_options", 00:14:47.045 "params": { 00:14:47.045 "action_on_timeout": "none", 00:14:47.045 "timeout_us": 0, 00:14:47.045 "timeout_admin_us": 0, 00:14:47.045 "keep_alive_timeout_ms": 10000, 00:14:47.045 "arbitration_burst": 0, 00:14:47.045 "low_priority_weight": 0, 00:14:47.045 "medium_priority_weight": 0, 00:14:47.045 "high_priority_weight": 0, 00:14:47.045 "nvme_adminq_poll_period_us": 10000, 00:14:47.045 "nvme_ioq_poll_period_us": 0, 00:14:47.045 "io_queue_requests": 0, 00:14:47.045 "delay_cmd_submit": true, 00:14:47.045 "transport_retry_count": 4, 00:14:47.045 "bdev_retry_count": 3, 00:14:47.045 "transport_ack_timeout": 0, 00:14:47.045 "ctrlr_loss_timeout_sec": 0, 00:14:47.045 "reconnect_delay_sec": 0, 00:14:47.045 "fast_io_fail_timeout_sec": 0, 00:14:47.045 "disable_auto_failback": false, 00:14:47.045 "generate_uuids": false, 00:14:47.045 "transport_tos": 0, 00:14:47.045 "nvme_error_stat": false, 00:14:47.045 "rdma_srq_size": 0, 00:14:47.045 "io_path_stat": false, 00:14:47.045 "allow_accel_sequence": false, 00:14:47.045 "rdma_max_cq_size": 0, 00:14:47.045 "rdma_cm_event_timeout_ms": 0, 00:14:47.045 "dhchap_digests": [ 00:14:47.045 "sha256", 00:14:47.045 "sha384", 00:14:47.045 "sha512" 00:14:47.045 ], 00:14:47.045 "dhchap_dhgroups": [ 00:14:47.045 "null", 00:14:47.045 "ffdhe2048", 00:14:47.045 "ffdhe3072", 00:14:47.045 "ffdhe4096", 00:14:47.045 "ffdhe6144", 00:14:47.045 "ffdhe8192" 00:14:47.045 ] 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "bdev_nvme_set_hotplug", 00:14:47.045 "params": { 00:14:47.045 "period_us": 100000, 00:14:47.045 "enable": false 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "bdev_malloc_create", 00:14:47.045 "params": { 00:14:47.045 "name": "malloc0", 00:14:47.045 
"num_blocks": 8192, 00:14:47.045 "block_size": 4096, 00:14:47.045 "physical_block_size": 4096, 00:14:47.045 "uuid": "211274d2-6644-49bd-b154-7c9cdc52c7da", 00:14:47.045 "optimal_io_boundary": 0 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "bdev_wait_for_examine" 00:14:47.045 } 00:14:47.045 ] 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "subsystem": "nbd", 00:14:47.045 "config": [] 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "subsystem": "scheduler", 00:14:47.045 "config": [ 00:14:47.045 { 00:14:47.045 "method": "framework_set_scheduler", 00:14:47.045 "params": { 00:14:47.045 "name": "static" 00:14:47.045 } 00:14:47.045 } 00:14:47.045 ] 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "subsystem": "nvmf", 00:14:47.045 "config": [ 00:14:47.045 { 00:14:47.045 "method": "nvmf_set_config", 00:14:47.045 "params": { 00:14:47.045 "discovery_filter": "match_any", 00:14:47.045 "admin_cmd_passthru": { 00:14:47.045 "identify_ctrlr": false 00:14:47.045 } 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "nvmf_set_max_subsystems", 00:14:47.045 "params": { 00:14:47.045 "max_subsystems": 1024 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "nvmf_set_crdt", 00:14:47.045 "params": { 00:14:47.045 "crdt1": 0, 00:14:47.045 "crdt2": 0, 00:14:47.045 "crdt3": 0 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "nvmf_create_transport", 00:14:47.045 "params": { 00:14:47.045 "trtype": "TCP", 00:14:47.045 "max_queue_depth": 128, 00:14:47.045 "max_io_qpairs_per_ctrlr": 127, 00:14:47.045 "in_capsule_data_size": 4096, 00:14:47.045 "max_io_size": 131072, 00:14:47.045 "io_unit_size": 131072, 00:14:47.045 "max_aq_depth": 128, 00:14:47.045 "num_shared_buffers": 511, 00:14:47.045 "buf_cache_size": 4294967295, 00:14:47.045 "dif_insert_or_strip": false, 00:14:47.045 "zcopy": false, 00:14:47.045 "c2h_success": false, 00:14:47.045 "sock_priority": 0, 00:14:47.045 "abort_timeout_sec": 1, 00:14:47.045 "ack_timeout": 0 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "nvmf_create_subsystem", 00:14:47.045 "params": { 00:14:47.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.045 "allow_any_host": false, 00:14:47.045 "serial_number": "SPDK00000000000001", 00:14:47.045 "model_number": "SPDK bdev Controller", 00:14:47.045 "max_namespaces": 10, 00:14:47.045 "min_cntlid": 1, 00:14:47.045 "max_cntlid": 65519, 00:14:47.045 "ana_reporting": false 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "nvmf_subsystem_add_host", 00:14:47.045 "params": { 00:14:47.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.045 "host": "nqn.2016-06.io.spdk:host1", 00:14:47.045 "psk": "/tmp/tmp.vEaWycc1IQ" 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "nvmf_subsystem_add_ns", 00:14:47.045 "params": { 00:14:47.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.045 "namespace": { 00:14:47.045 "nsid": 1, 00:14:47.045 "bdev_name": "malloc0", 00:14:47.045 "nguid": "211274D2664449BDB1547C9CDC52C7DA", 00:14:47.045 "uuid": "211274d2-6644-49bd-b154-7c9cdc52c7da", 00:14:47.045 "no_auto_visible": false 00:14:47.045 } 00:14:47.045 } 00:14:47.045 }, 00:14:47.045 { 00:14:47.045 "method": "nvmf_subsystem_add_listener", 00:14:47.045 "params": { 00:14:47.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.045 "listen_address": { 00:14:47.045 "trtype": "TCP", 00:14:47.045 "adrfam": "IPv4", 00:14:47.045 "traddr": "10.0.0.2", 00:14:47.045 "trsvcid": "4420" 00:14:47.045 }, 00:14:47.045 "secure_channel": true 00:14:47.045 } 00:14:47.045 } 
00:14:47.045 ] 00:14:47.045 } 00:14:47.045 ] 00:14:47.045 }' 00:14:47.045 21:29:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:47.045 21:29:12 -- common/autotest_common.sh@10 -- # set +x 00:14:47.045 21:29:12 -- nvmf/common.sh@470 -- # nvmfpid=2606515 00:14:47.045 21:29:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:47.045 21:29:12 -- nvmf/common.sh@471 -- # waitforlisten 2606515 00:14:47.045 21:29:12 -- common/autotest_common.sh@817 -- # '[' -z 2606515 ']' 00:14:47.045 21:29:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.045 21:29:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:47.045 21:29:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.045 21:29:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:47.045 21:29:12 -- common/autotest_common.sh@10 -- # set +x 00:14:47.045 [2024-04-24 21:29:12.641157] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:47.045 [2024-04-24 21:29:12.641245] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.045 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.045 [2024-04-24 21:29:12.710300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.304 [2024-04-24 21:29:12.821132] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.304 [2024-04-24 21:29:12.821204] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.304 [2024-04-24 21:29:12.821229] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.304 [2024-04-24 21:29:12.821243] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.304 [2024-04-24 21:29:12.821254] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
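The -c /dev/fd/62 argument in the nvmf_tgt invocation above means the target never reads its configuration from disk: the JSON dump echoed by tls.sh@203 is handed over an anonymous file descriptor. A minimal sketch of the idiom, assuming the dump is held in a shell variable (the variable name is not visible at this point in the trace):

  # feed the generated JSON config to nvmf_tgt via process substitution;
  # the shell exposes it as /dev/fd/<n>, so nothing is written to disk
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
      -c <(echo "$tgtconf")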
00:14:47.304 [2024-04-24 21:29:12.821355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.571 [2024-04-24 21:29:13.054315] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.571 [2024-04-24 21:29:13.070249] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:47.571 [2024-04-24 21:29:13.086310] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:47.571 [2024-04-24 21:29:13.096841] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.180 21:29:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:48.180 21:29:13 -- common/autotest_common.sh@850 -- # return 0 00:14:48.180 21:29:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:48.181 21:29:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:48.181 21:29:13 -- common/autotest_common.sh@10 -- # set +x 00:14:48.181 21:29:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.181 21:29:13 -- target/tls.sh@207 -- # bdevperf_pid=2606563 00:14:48.181 21:29:13 -- target/tls.sh@208 -- # waitforlisten 2606563 /var/tmp/bdevperf.sock 00:14:48.181 21:29:13 -- common/autotest_common.sh@817 -- # '[' -z 2606563 ']' 00:14:48.181 21:29:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.181 21:29:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:48.181 21:29:13 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:48.181 21:29:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
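The bdevperf process launched above runs with -z (wait-for-tests mode): it parses its config, binds /var/tmp/bdevperf.sock, and idles until a perform_tests RPC arrives, which is what tls.sh@211 issues a few lines below. The two halves of that flow, condensed from this trace (workspace prefix dropped; bdevperfconf is the variable the script captured at tls.sh@197):

  # start bdevperf idle on its own RPC socket, config piped in over an fd
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
  # then trigger the workload, with a 20 s timeout on the RPC itself
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests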
00:14:48.181 21:29:13 -- target/tls.sh@204 -- # echo '{ 00:14:48.181 "subsystems": [ 00:14:48.181 { 00:14:48.181 "subsystem": "keyring", 00:14:48.181 "config": [] 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "subsystem": "iobuf", 00:14:48.181 "config": [ 00:14:48.181 { 00:14:48.181 "method": "iobuf_set_options", 00:14:48.181 "params": { 00:14:48.181 "small_pool_count": 8192, 00:14:48.181 "large_pool_count": 1024, 00:14:48.181 "small_bufsize": 8192, 00:14:48.181 "large_bufsize": 135168 00:14:48.181 } 00:14:48.181 } 00:14:48.181 ] 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "subsystem": "sock", 00:14:48.181 "config": [ 00:14:48.181 { 00:14:48.181 "method": "sock_impl_set_options", 00:14:48.181 "params": { 00:14:48.181 "impl_name": "posix", 00:14:48.181 "recv_buf_size": 2097152, 00:14:48.181 "send_buf_size": 2097152, 00:14:48.181 "enable_recv_pipe": true, 00:14:48.181 "enable_quickack": false, 00:14:48.181 "enable_placement_id": 0, 00:14:48.181 "enable_zerocopy_send_server": true, 00:14:48.181 "enable_zerocopy_send_client": false, 00:14:48.181 "zerocopy_threshold": 0, 00:14:48.181 "tls_version": 0, 00:14:48.181 "enable_ktls": false 00:14:48.181 } 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "method": "sock_impl_set_options", 00:14:48.181 "params": { 00:14:48.181 "impl_name": "ssl", 00:14:48.181 "recv_buf_size": 4096, 00:14:48.181 "send_buf_size": 4096, 00:14:48.181 "enable_recv_pipe": true, 00:14:48.181 "enable_quickack": false, 00:14:48.181 "enable_placement_id": 0, 00:14:48.181 "enable_zerocopy_send_server": true, 00:14:48.181 "enable_zerocopy_send_client": false, 00:14:48.181 "zerocopy_threshold": 0, 00:14:48.181 "tls_version": 0, 00:14:48.181 "enable_ktls": false 00:14:48.181 } 00:14:48.181 } 00:14:48.181 ] 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "subsystem": "vmd", 00:14:48.181 "config": [] 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "subsystem": "accel", 00:14:48.181 "config": [ 00:14:48.181 { 00:14:48.181 "method": "accel_set_options", 00:14:48.181 "params": { 00:14:48.181 "small_cache_size": 128, 00:14:48.181 "large_cache_size": 16, 00:14:48.181 "task_count": 2048, 00:14:48.181 "sequence_count": 2048, 00:14:48.181 "buf_count": 2048 00:14:48.181 } 00:14:48.181 } 00:14:48.181 ] 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "subsystem": "bdev", 00:14:48.181 "config": [ 00:14:48.181 { 00:14:48.181 "method": "bdev_set_options", 00:14:48.181 "params": { 00:14:48.181 "bdev_io_pool_size": 65535, 00:14:48.181 "bdev_io_cache_size": 256, 00:14:48.181 "bdev_auto_examine": true, 00:14:48.181 "iobuf_small_cache_size": 128, 00:14:48.181 "iobuf_large_cache_size": 16 00:14:48.181 } 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "method": "bdev_raid_set_options", 00:14:48.181 "params": { 00:14:48.181 "process_window_size_kb": 1024 00:14:48.181 } 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "method": "bdev_iscsi_set_options", 00:14:48.181 "params": { 00:14:48.181 "timeout_sec": 30 00:14:48.181 } 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "method": "bdev_nvme_set_options", 00:14:48.181 "params": { 00:14:48.181 "action_on_timeout": "none", 00:14:48.181 "timeout_us": 0, 00:14:48.181 "timeout_admin_us": 0, 00:14:48.181 "keep_alive_timeout_ms": 10000, 00:14:48.181 "arbitration_burst": 0, 00:14:48.181 "low_priority_weight": 0, 00:14:48.181 "medium_priority_weight": 0, 00:14:48.181 "high_priority_weight": 0, 00:14:48.181 "nvme_adminq_poll_period_us": 10000, 00:14:48.181 "nvme_ioq_poll_period_us": 0, 00:14:48.181 "io_queue_requests": 512, 00:14:48.181 "delay_cmd_submit": true, 00:14:48.181 "transport_retry_count": 
4, 00:14:48.181 "bdev_retry_count": 3, 00:14:48.181 "transport_ack_timeout": 0, 00:14:48.181 "ctrlr_loss_timeout_sec": 0, 00:14:48.181 "reconnect_delay_sec": 0, 00:14:48.181 "fast_io_fail_timeout_sec": 0, 00:14:48.181 "disable_auto_failback": false, 00:14:48.181 "generate_uuids": false, 00:14:48.181 "transport_tos": 0, 00:14:48.181 "nvme_error_stat": false, 00:14:48.181 "rdma_srq_size": 0, 00:14:48.181 "io_path_stat": false, 00:14:48.181 "allow_accel_sequence": false, 00:14:48.181 "rdma_max_cq_size": 0, 00:14:48.181 "rdma_cm_event_timeout_ms": 0, 00:14:48.181 "dhchap_digests": [ 00:14:48.181 "sha256", 00:14:48.181 "sha384", 00:14:48.181 "sha512" 00:14:48.181 ], 00:14:48.181 "dhchap_dhgroups": [ 00:14:48.181 "null", 00:14:48.181 "ffdhe2048", 00:14:48.181 "ffdhe3072", 00:14:48.181 "ffdhe4096", 00:14:48.181 "ffdhe6144", 00:14:48.181 "ffdhe8192" 00:14:48.181 ] 00:14:48.181 } 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "method": "bdev_nvme_attach_controller", 00:14:48.181 "params": { 00:14:48.181 "name": "TLSTEST", 00:14:48.181 "trtype": "TCP", 00:14:48.181 "adrfam": "IPv4", 00:14:48.181 "traddr": "10.0.0.2", 00:14:48.181 "trsvcid": "4420", 00:14:48.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.181 "prchk_reftag": false, 00:14:48.181 "prchk_guard": false, 00:14:48.181 "ctrlr_loss_timeout_sec": 0, 00:14:48.181 "reconnect_delay_sec": 0, 00:14:48.181 "fast_io_fail_timeout_sec": 0, 00:14:48.181 "psk": "/tmp/tmp.vEaWycc1IQ", 00:14:48.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.181 "hdgst": false, 00:14:48.181 "ddgst": false 00:14:48.181 } 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "method": "bdev_nvme_set_hotplug", 00:14:48.181 "params": { 00:14:48.181 "period_us": 100000, 00:14:48.181 "enable": false 00:14:48.181 } 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "method": "bdev_wait_for_examine" 00:14:48.181 } 00:14:48.181 ] 00:14:48.181 }, 00:14:48.181 { 00:14:48.181 "subsystem": "nbd", 00:14:48.181 "config": [] 00:14:48.181 } 00:14:48.181 ] 00:14:48.181 }' 00:14:48.181 21:29:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:48.181 21:29:13 -- common/autotest_common.sh@10 -- # set +x 00:14:48.181 [2024-04-24 21:29:13.630897] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:48.181 [2024-04-24 21:29:13.631003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606563 ] 00:14:48.181 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.181 [2024-04-24 21:29:13.692268] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.181 [2024-04-24 21:29:13.797480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.438 [2024-04-24 21:29:13.960009] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:48.438 [2024-04-24 21:29:13.960127] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:49.003 21:29:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:49.003 21:29:14 -- common/autotest_common.sh@850 -- # return 0 00:14:49.003 21:29:14 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:49.263 Running I/O for 10 seconds... 
00:14:59.230 00:14:59.230 Latency(us) 00:14:59.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.230 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:59.230 Verification LBA range: start 0x0 length 0x2000 00:14:59.230 TLSTESTn1 : 10.07 1434.19 5.60 0.00 0.00 88962.87 9709.04 119615.34 00:14:59.230 =================================================================================================================== 00:14:59.230 Total : 1434.19 5.60 0.00 0.00 88962.87 9709.04 119615.34 00:14:59.230 0 00:14:59.230 21:29:24 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.230 21:29:24 -- target/tls.sh@214 -- # killprocess 2606563 00:14:59.230 21:29:24 -- common/autotest_common.sh@936 -- # '[' -z 2606563 ']' 00:14:59.230 21:29:24 -- common/autotest_common.sh@940 -- # kill -0 2606563 00:14:59.230 21:29:24 -- common/autotest_common.sh@941 -- # uname 00:14:59.230 21:29:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.230 21:29:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2606563 00:14:59.230 21:29:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:59.230 21:29:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:59.230 21:29:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2606563' 00:14:59.230 killing process with pid 2606563 00:14:59.230 21:29:24 -- common/autotest_common.sh@955 -- # kill 2606563 00:14:59.230 Received shutdown signal, test time was about 10.000000 seconds 00:14:59.230 00:14:59.230 Latency(us) 00:14:59.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.230 =================================================================================================================== 00:14:59.230 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.230 [2024-04-24 21:29:24.842373] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:59.230 21:29:24 -- common/autotest_common.sh@960 -- # wait 2606563 00:14:59.489 21:29:25 -- target/tls.sh@215 -- # killprocess 2606515 00:14:59.489 21:29:25 -- common/autotest_common.sh@936 -- # '[' -z 2606515 ']' 00:14:59.489 21:29:25 -- common/autotest_common.sh@940 -- # kill -0 2606515 00:14:59.489 21:29:25 -- common/autotest_common.sh@941 -- # uname 00:14:59.489 21:29:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.489 21:29:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2606515 00:14:59.489 21:29:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:59.489 21:29:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:59.489 21:29:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2606515' 00:14:59.489 killing process with pid 2606515 00:14:59.489 21:29:25 -- common/autotest_common.sh@955 -- # kill 2606515 00:14:59.489 [2024-04-24 21:29:25.128470] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:59.489 21:29:25 -- common/autotest_common.sh@960 -- # wait 2606515 00:14:59.748 21:29:25 -- target/tls.sh@218 -- # nvmfappstart 00:14:59.748 21:29:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:59.748 21:29:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:59.748 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:15:00.007 21:29:25 
-- nvmf/common.sh@470 -- # nvmfpid=2608003 00:15:00.007 21:29:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:00.007 21:29:25 -- nvmf/common.sh@471 -- # waitforlisten 2608003 00:15:00.007 21:29:25 -- common/autotest_common.sh@817 -- # '[' -z 2608003 ']' 00:15:00.007 21:29:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.007 21:29:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:00.007 21:29:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.007 21:29:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:00.007 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:15:00.007 [2024-04-24 21:29:25.478337] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:00.007 [2024-04-24 21:29:25.478432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.007 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.007 [2024-04-24 21:29:25.547564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.007 [2024-04-24 21:29:25.658178] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.007 [2024-04-24 21:29:25.658258] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.007 [2024-04-24 21:29:25.658285] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.007 [2024-04-24 21:29:25.658299] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.007 [2024-04-24 21:29:25.658311] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
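The tracepoint notices printed at every app start are directly usable: -e 0xFFFF enables all trace groups and the trace ring lives in shared memory, so per the notice itself a snapshot can be pulled from the running target or preserved for offline analysis:

  ./build/bin/spdk_trace -s nvmf -i 0    # live snapshot, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/         # or keep the ring for post-mortem debug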
00:15:00.007 [2024-04-24 21:29:25.658344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.943 21:29:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:00.943 21:29:26 -- common/autotest_common.sh@850 -- # return 0 00:15:00.943 21:29:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:00.943 21:29:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:00.943 21:29:26 -- common/autotest_common.sh@10 -- # set +x 00:15:00.944 21:29:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.944 21:29:26 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.vEaWycc1IQ 00:15:00.944 21:29:26 -- target/tls.sh@49 -- # local key=/tmp/tmp.vEaWycc1IQ 00:15:00.944 21:29:26 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:01.201 [2024-04-24 21:29:26.648393] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.201 21:29:26 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:01.459 21:29:26 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:01.718 [2024-04-24 21:29:27.177806] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:01.718 [2024-04-24 21:29:27.178049] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.718 21:29:27 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:01.975 malloc0 00:15:01.975 21:29:27 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:02.234 21:29:27 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vEaWycc1IQ 00:15:02.234 [2024-04-24 21:29:27.910431] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:02.494 21:29:27 -- target/tls.sh@222 -- # bdevperf_pid=2608294 00:15:02.494 21:29:27 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:02.494 21:29:27 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:02.494 21:29:27 -- target/tls.sh@225 -- # waitforlisten 2608294 /var/tmp/bdevperf.sock 00:15:02.494 21:29:27 -- common/autotest_common.sh@817 -- # '[' -z 2608294 ']' 00:15:02.494 21:29:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.494 21:29:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:02.494 21:29:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
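Before the bdevperf instance above starts waiting, setup_nvmf_tgt configured target pid 2608003 entirely over RPC rather than through a startup config. The sequence, extracted from the trace and condensed (-k on the listener enables the experimental TLS path, and --psk points at the pre-shared key file the test created earlier):

  RPC=scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vEaWycc1IQ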
00:15:02.494 21:29:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:02.494 21:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:02.494 [2024-04-24 21:29:27.976314] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:02.494 [2024-04-24 21:29:27.976401] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608294 ] 00:15:02.494 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.494 [2024-04-24 21:29:28.038822] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.494 [2024-04-24 21:29:28.152515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.752 21:29:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:02.752 21:29:28 -- common/autotest_common.sh@850 -- # return 0 00:15:02.752 21:29:28 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vEaWycc1IQ 00:15:03.011 21:29:28 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:03.270 [2024-04-24 21:29:28.760361] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:03.270 nvme0n1 00:15:03.270 21:29:28 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:03.528 Running I/O for 1 seconds... 
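On the initiator side this run differs from the earlier TLSTEST attach: the PSK is no longer passed to bdev_nvme_attach_controller as a raw file path (the deprecated spdk_nvme_ctrlr_opts.psk route flagged in the shutdown messages above) but is first registered with the keyring and then referenced by name, over bdevperf's RPC socket:

  RPC=scripts/rpc.py
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vEaWycc1IQ
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1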
00:15:04.468 00:15:04.468 Latency(us) 00:15:04.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.468 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:04.468 Verification LBA range: start 0x0 length 0x2000 00:15:04.468 nvme0n1 : 1.06 1144.06 4.47 0.00 0.00 109627.86 11213.94 124275.67 00:15:04.469 =================================================================================================================== 00:15:04.469 Total : 1144.06 4.47 0.00 0.00 109627.86 11213.94 124275.67 00:15:04.469 0 00:15:04.469 21:29:30 -- target/tls.sh@234 -- # killprocess 2608294 00:15:04.469 21:29:30 -- common/autotest_common.sh@936 -- # '[' -z 2608294 ']' 00:15:04.469 21:29:30 -- common/autotest_common.sh@940 -- # kill -0 2608294 00:15:04.469 21:29:30 -- common/autotest_common.sh@941 -- # uname 00:15:04.469 21:29:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.469 21:29:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2608294 00:15:04.469 21:29:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:04.469 21:29:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:04.469 21:29:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2608294' 00:15:04.469 killing process with pid 2608294 00:15:04.469 21:29:30 -- common/autotest_common.sh@955 -- # kill 2608294 00:15:04.469 Received shutdown signal, test time was about 1.000000 seconds 00:15:04.469 00:15:04.469 Latency(us) 00:15:04.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.469 =================================================================================================================== 00:15:04.469 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:04.469 21:29:30 -- common/autotest_common.sh@960 -- # wait 2608294 00:15:04.727 21:29:30 -- target/tls.sh@235 -- # killprocess 2608003 00:15:04.727 21:29:30 -- common/autotest_common.sh@936 -- # '[' -z 2608003 ']' 00:15:04.727 21:29:30 -- common/autotest_common.sh@940 -- # kill -0 2608003 00:15:04.727 21:29:30 -- common/autotest_common.sh@941 -- # uname 00:15:04.727 21:29:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.727 21:29:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2608003 00:15:04.727 21:29:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:04.727 21:29:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:04.727 21:29:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2608003' 00:15:04.727 killing process with pid 2608003 00:15:04.727 21:29:30 -- common/autotest_common.sh@955 -- # kill 2608003 00:15:04.727 [2024-04-24 21:29:30.385970] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:04.727 21:29:30 -- common/autotest_common.sh@960 -- # wait 2608003 00:15:05.294 21:29:30 -- target/tls.sh@238 -- # nvmfappstart 00:15:05.294 21:29:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:05.294 21:29:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:05.294 21:29:30 -- common/autotest_common.sh@10 -- # set +x 00:15:05.294 21:29:30 -- nvmf/common.sh@470 -- # nvmfpid=2608650 00:15:05.294 21:29:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:05.294 21:29:30 -- nvmf/common.sh@471 -- # waitforlisten 2608650 
00:15:05.294 21:29:30 -- common/autotest_common.sh@817 -- # '[' -z 2608650 ']' 00:15:05.294 21:29:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.294 21:29:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:05.294 21:29:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.294 21:29:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:05.294 21:29:30 -- common/autotest_common.sh@10 -- # set +x 00:15:05.294 [2024-04-24 21:29:30.716796] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:05.294 [2024-04-24 21:29:30.716881] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.294 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.294 [2024-04-24 21:29:30.782900] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.294 [2024-04-24 21:29:30.886732] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.294 [2024-04-24 21:29:30.886789] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.294 [2024-04-24 21:29:30.886803] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.294 [2024-04-24 21:29:30.886814] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.294 [2024-04-24 21:29:30.886823] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
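This third target (pid 2608650) is again configured live over RPC, and the save_config calls that follow (tls.sh@263 against the target, tls.sh@264 against the bdevperf socket) are what produce the tgtcfg and bperfcfg JSON dumps filling the rest of this log. The script captures them into shell variables; redirecting to files, as sketched here, is the equivalent standalone form:

  RPC=scripts/rpc.py
  $RPC save_config > tgtcfg.json                               # target-side snapshot
  $RPC -s /var/tmp/bdevperf.sock save_config > bperfcfg.json   # initiator-side snapshot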
00:15:05.294 [2024-04-24 21:29:30.886851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.552 21:29:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.552 21:29:30 -- common/autotest_common.sh@850 -- # return 0 00:15:05.552 21:29:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:05.552 21:29:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:05.552 21:29:30 -- common/autotest_common.sh@10 -- # set +x 00:15:05.552 21:29:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.553 21:29:31 -- target/tls.sh@239 -- # rpc_cmd 00:15:05.553 21:29:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.553 21:29:31 -- common/autotest_common.sh@10 -- # set +x 00:15:05.553 [2024-04-24 21:29:31.032222] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.553 malloc0 00:15:05.553 [2024-04-24 21:29:31.064960] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:05.553 [2024-04-24 21:29:31.065223] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.553 21:29:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.553 21:29:31 -- target/tls.sh@252 -- # bdevperf_pid=2608720 00:15:05.553 21:29:31 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:05.553 21:29:31 -- target/tls.sh@254 -- # waitforlisten 2608720 /var/tmp/bdevperf.sock 00:15:05.553 21:29:31 -- common/autotest_common.sh@817 -- # '[' -z 2608720 ']' 00:15:05.553 21:29:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:05.553 21:29:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:05.553 21:29:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:05.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:05.553 21:29:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:05.553 21:29:31 -- common/autotest_common.sh@10 -- # set +x 00:15:05.553 [2024-04-24 21:29:31.134892] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:15:05.553 [2024-04-24 21:29:31.134968] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608720 ] 00:15:05.553 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.553 [2024-04-24 21:29:31.195745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.811 [2024-04-24 21:29:31.310142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.811 21:29:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.811 21:29:31 -- common/autotest_common.sh@850 -- # return 0 00:15:05.811 21:29:31 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vEaWycc1IQ 00:15:06.069 21:29:31 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:06.327 [2024-04-24 21:29:31.960570] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:06.586 nvme0n1 00:15:06.586 21:29:32 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:06.586 Running I/O for 1 seconds... 00:15:08.007 00:15:08.007 Latency(us) 00:15:08.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.007 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:08.007 Verification LBA range: start 0x0 length 0x2000 00:15:08.007 nvme0n1 : 1.08 1429.08 5.58 0.00 0.00 86978.68 5825.42 131266.18 00:15:08.007 =================================================================================================================== 00:15:08.007 Total : 1429.08 5.58 0.00 0.00 86978.68 5825.42 131266.18 00:15:08.007 0 00:15:08.007 21:29:33 -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:08.007 21:29:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:08.007 21:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:08.007 21:29:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:08.007 21:29:33 -- target/tls.sh@263 -- # tgtcfg='{ 00:15:08.007 "subsystems": [ 00:15:08.007 { 00:15:08.007 "subsystem": "keyring", 00:15:08.007 "config": [ 00:15:08.007 { 00:15:08.007 "method": "keyring_file_add_key", 00:15:08.007 "params": { 00:15:08.007 "name": "key0", 00:15:08.007 "path": "/tmp/tmp.vEaWycc1IQ" 00:15:08.007 } 00:15:08.007 } 00:15:08.007 ] 00:15:08.007 }, 00:15:08.007 { 00:15:08.007 "subsystem": "iobuf", 00:15:08.007 "config": [ 00:15:08.007 { 00:15:08.007 "method": "iobuf_set_options", 00:15:08.007 "params": { 00:15:08.007 "small_pool_count": 8192, 00:15:08.007 "large_pool_count": 1024, 00:15:08.007 "small_bufsize": 8192, 00:15:08.007 "large_bufsize": 135168 00:15:08.007 } 00:15:08.007 } 00:15:08.007 ] 00:15:08.007 }, 00:15:08.007 { 00:15:08.007 "subsystem": "sock", 00:15:08.007 "config": [ 00:15:08.007 { 00:15:08.007 "method": "sock_impl_set_options", 00:15:08.007 "params": { 00:15:08.007 "impl_name": "posix", 00:15:08.007 "recv_buf_size": 2097152, 00:15:08.007 "send_buf_size": 2097152, 00:15:08.007 "enable_recv_pipe": true, 00:15:08.007 "enable_quickack": false, 00:15:08.007 "enable_placement_id": 0, 00:15:08.007 
"enable_zerocopy_send_server": true, 00:15:08.007 "enable_zerocopy_send_client": false, 00:15:08.007 "zerocopy_threshold": 0, 00:15:08.007 "tls_version": 0, 00:15:08.007 "enable_ktls": false 00:15:08.007 } 00:15:08.007 }, 00:15:08.007 { 00:15:08.007 "method": "sock_impl_set_options", 00:15:08.007 "params": { 00:15:08.007 "impl_name": "ssl", 00:15:08.007 "recv_buf_size": 4096, 00:15:08.007 "send_buf_size": 4096, 00:15:08.007 "enable_recv_pipe": true, 00:15:08.007 "enable_quickack": false, 00:15:08.007 "enable_placement_id": 0, 00:15:08.007 "enable_zerocopy_send_server": true, 00:15:08.007 "enable_zerocopy_send_client": false, 00:15:08.007 "zerocopy_threshold": 0, 00:15:08.007 "tls_version": 0, 00:15:08.007 "enable_ktls": false 00:15:08.007 } 00:15:08.007 } 00:15:08.007 ] 00:15:08.007 }, 00:15:08.007 { 00:15:08.007 "subsystem": "vmd", 00:15:08.007 "config": [] 00:15:08.007 }, 00:15:08.007 { 00:15:08.007 "subsystem": "accel", 00:15:08.007 "config": [ 00:15:08.007 { 00:15:08.007 "method": "accel_set_options", 00:15:08.008 "params": { 00:15:08.008 "small_cache_size": 128, 00:15:08.008 "large_cache_size": 16, 00:15:08.008 "task_count": 2048, 00:15:08.008 "sequence_count": 2048, 00:15:08.008 "buf_count": 2048 00:15:08.008 } 00:15:08.008 } 00:15:08.008 ] 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "subsystem": "bdev", 00:15:08.008 "config": [ 00:15:08.008 { 00:15:08.008 "method": "bdev_set_options", 00:15:08.008 "params": { 00:15:08.008 "bdev_io_pool_size": 65535, 00:15:08.008 "bdev_io_cache_size": 256, 00:15:08.008 "bdev_auto_examine": true, 00:15:08.008 "iobuf_small_cache_size": 128, 00:15:08.008 "iobuf_large_cache_size": 16 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "bdev_raid_set_options", 00:15:08.008 "params": { 00:15:08.008 "process_window_size_kb": 1024 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "bdev_iscsi_set_options", 00:15:08.008 "params": { 00:15:08.008 "timeout_sec": 30 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "bdev_nvme_set_options", 00:15:08.008 "params": { 00:15:08.008 "action_on_timeout": "none", 00:15:08.008 "timeout_us": 0, 00:15:08.008 "timeout_admin_us": 0, 00:15:08.008 "keep_alive_timeout_ms": 10000, 00:15:08.008 "arbitration_burst": 0, 00:15:08.008 "low_priority_weight": 0, 00:15:08.008 "medium_priority_weight": 0, 00:15:08.008 "high_priority_weight": 0, 00:15:08.008 "nvme_adminq_poll_period_us": 10000, 00:15:08.008 "nvme_ioq_poll_period_us": 0, 00:15:08.008 "io_queue_requests": 0, 00:15:08.008 "delay_cmd_submit": true, 00:15:08.008 "transport_retry_count": 4, 00:15:08.008 "bdev_retry_count": 3, 00:15:08.008 "transport_ack_timeout": 0, 00:15:08.008 "ctrlr_loss_timeout_sec": 0, 00:15:08.008 "reconnect_delay_sec": 0, 00:15:08.008 "fast_io_fail_timeout_sec": 0, 00:15:08.008 "disable_auto_failback": false, 00:15:08.008 "generate_uuids": false, 00:15:08.008 "transport_tos": 0, 00:15:08.008 "nvme_error_stat": false, 00:15:08.008 "rdma_srq_size": 0, 00:15:08.008 "io_path_stat": false, 00:15:08.008 "allow_accel_sequence": false, 00:15:08.008 "rdma_max_cq_size": 0, 00:15:08.008 "rdma_cm_event_timeout_ms": 0, 00:15:08.008 "dhchap_digests": [ 00:15:08.008 "sha256", 00:15:08.008 "sha384", 00:15:08.008 "sha512" 00:15:08.008 ], 00:15:08.008 "dhchap_dhgroups": [ 00:15:08.008 "null", 00:15:08.008 "ffdhe2048", 00:15:08.008 "ffdhe3072", 00:15:08.008 "ffdhe4096", 00:15:08.008 "ffdhe6144", 00:15:08.008 "ffdhe8192" 00:15:08.008 ] 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": 
"bdev_nvme_set_hotplug", 00:15:08.008 "params": { 00:15:08.008 "period_us": 100000, 00:15:08.008 "enable": false 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "bdev_malloc_create", 00:15:08.008 "params": { 00:15:08.008 "name": "malloc0", 00:15:08.008 "num_blocks": 8192, 00:15:08.008 "block_size": 4096, 00:15:08.008 "physical_block_size": 4096, 00:15:08.008 "uuid": "d5c344e3-76ca-4439-b28d-134f15af31bd", 00:15:08.008 "optimal_io_boundary": 0 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "bdev_wait_for_examine" 00:15:08.008 } 00:15:08.008 ] 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "subsystem": "nbd", 00:15:08.008 "config": [] 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "subsystem": "scheduler", 00:15:08.008 "config": [ 00:15:08.008 { 00:15:08.008 "method": "framework_set_scheduler", 00:15:08.008 "params": { 00:15:08.008 "name": "static" 00:15:08.008 } 00:15:08.008 } 00:15:08.008 ] 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "subsystem": "nvmf", 00:15:08.008 "config": [ 00:15:08.008 { 00:15:08.008 "method": "nvmf_set_config", 00:15:08.008 "params": { 00:15:08.008 "discovery_filter": "match_any", 00:15:08.008 "admin_cmd_passthru": { 00:15:08.008 "identify_ctrlr": false 00:15:08.008 } 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "nvmf_set_max_subsystems", 00:15:08.008 "params": { 00:15:08.008 "max_subsystems": 1024 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "nvmf_set_crdt", 00:15:08.008 "params": { 00:15:08.008 "crdt1": 0, 00:15:08.008 "crdt2": 0, 00:15:08.008 "crdt3": 0 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "nvmf_create_transport", 00:15:08.008 "params": { 00:15:08.008 "trtype": "TCP", 00:15:08.008 "max_queue_depth": 128, 00:15:08.008 "max_io_qpairs_per_ctrlr": 127, 00:15:08.008 "in_capsule_data_size": 4096, 00:15:08.008 "max_io_size": 131072, 00:15:08.008 "io_unit_size": 131072, 00:15:08.008 "max_aq_depth": 128, 00:15:08.008 "num_shared_buffers": 511, 00:15:08.008 "buf_cache_size": 4294967295, 00:15:08.008 "dif_insert_or_strip": false, 00:15:08.008 "zcopy": false, 00:15:08.008 "c2h_success": false, 00:15:08.008 "sock_priority": 0, 00:15:08.008 "abort_timeout_sec": 1, 00:15:08.008 "ack_timeout": 0 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "nvmf_create_subsystem", 00:15:08.008 "params": { 00:15:08.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.008 "allow_any_host": false, 00:15:08.008 "serial_number": "00000000000000000000", 00:15:08.008 "model_number": "SPDK bdev Controller", 00:15:08.008 "max_namespaces": 32, 00:15:08.008 "min_cntlid": 1, 00:15:08.008 "max_cntlid": 65519, 00:15:08.008 "ana_reporting": false 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "nvmf_subsystem_add_host", 00:15:08.008 "params": { 00:15:08.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.008 "host": "nqn.2016-06.io.spdk:host1", 00:15:08.008 "psk": "key0" 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "nvmf_subsystem_add_ns", 00:15:08.008 "params": { 00:15:08.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.008 "namespace": { 00:15:08.008 "nsid": 1, 00:15:08.008 "bdev_name": "malloc0", 00:15:08.008 "nguid": "D5C344E376CA4439B28D134F15AF31BD", 00:15:08.008 "uuid": "d5c344e3-76ca-4439-b28d-134f15af31bd", 00:15:08.008 "no_auto_visible": false 00:15:08.008 } 00:15:08.008 } 00:15:08.008 }, 00:15:08.008 { 00:15:08.008 "method": "nvmf_subsystem_add_listener", 00:15:08.008 "params": { 00:15:08.008 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:08.008 "listen_address": { 00:15:08.008 "trtype": "TCP", 00:15:08.008 "adrfam": "IPv4", 00:15:08.008 "traddr": "10.0.0.2", 00:15:08.008 "trsvcid": "4420" 00:15:08.008 }, 00:15:08.008 "secure_channel": true 00:15:08.008 } 00:15:08.008 } 00:15:08.008 ] 00:15:08.008 } 00:15:08.008 ] 00:15:08.008 }' 00:15:08.008 21:29:33 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:08.269 21:29:33 -- target/tls.sh@264 -- # bperfcfg='{ 00:15:08.269 "subsystems": [ 00:15:08.269 { 00:15:08.269 "subsystem": "keyring", 00:15:08.269 "config": [ 00:15:08.269 { 00:15:08.269 "method": "keyring_file_add_key", 00:15:08.269 "params": { 00:15:08.269 "name": "key0", 00:15:08.269 "path": "/tmp/tmp.vEaWycc1IQ" 00:15:08.269 } 00:15:08.269 } 00:15:08.269 ] 00:15:08.269 }, 00:15:08.269 { 00:15:08.269 "subsystem": "iobuf", 00:15:08.269 "config": [ 00:15:08.269 { 00:15:08.269 "method": "iobuf_set_options", 00:15:08.269 "params": { 00:15:08.269 "small_pool_count": 8192, 00:15:08.269 "large_pool_count": 1024, 00:15:08.269 "small_bufsize": 8192, 00:15:08.269 "large_bufsize": 135168 00:15:08.269 } 00:15:08.269 } 00:15:08.269 ] 00:15:08.269 }, 00:15:08.269 { 00:15:08.269 "subsystem": "sock", 00:15:08.269 "config": [ 00:15:08.269 { 00:15:08.269 "method": "sock_impl_set_options", 00:15:08.269 "params": { 00:15:08.269 "impl_name": "posix", 00:15:08.269 "recv_buf_size": 2097152, 00:15:08.269 "send_buf_size": 2097152, 00:15:08.269 "enable_recv_pipe": true, 00:15:08.269 "enable_quickack": false, 00:15:08.269 "enable_placement_id": 0, 00:15:08.269 "enable_zerocopy_send_server": true, 00:15:08.269 "enable_zerocopy_send_client": false, 00:15:08.269 "zerocopy_threshold": 0, 00:15:08.269 "tls_version": 0, 00:15:08.269 "enable_ktls": false 00:15:08.269 } 00:15:08.269 }, 00:15:08.269 { 00:15:08.269 "method": "sock_impl_set_options", 00:15:08.269 "params": { 00:15:08.269 "impl_name": "ssl", 00:15:08.269 "recv_buf_size": 4096, 00:15:08.269 "send_buf_size": 4096, 00:15:08.269 "enable_recv_pipe": true, 00:15:08.269 "enable_quickack": false, 00:15:08.269 "enable_placement_id": 0, 00:15:08.269 "enable_zerocopy_send_server": true, 00:15:08.269 "enable_zerocopy_send_client": false, 00:15:08.269 "zerocopy_threshold": 0, 00:15:08.269 "tls_version": 0, 00:15:08.269 "enable_ktls": false 00:15:08.269 } 00:15:08.269 } 00:15:08.269 ] 00:15:08.269 }, 00:15:08.269 { 00:15:08.269 "subsystem": "vmd", 00:15:08.269 "config": [] 00:15:08.269 }, 00:15:08.269 { 00:15:08.269 "subsystem": "accel", 00:15:08.269 "config": [ 00:15:08.269 { 00:15:08.269 "method": "accel_set_options", 00:15:08.269 "params": { 00:15:08.269 "small_cache_size": 128, 00:15:08.269 "large_cache_size": 16, 00:15:08.269 "task_count": 2048, 00:15:08.270 "sequence_count": 2048, 00:15:08.270 "buf_count": 2048 00:15:08.270 } 00:15:08.270 } 00:15:08.270 ] 00:15:08.270 }, 00:15:08.270 { 00:15:08.270 "subsystem": "bdev", 00:15:08.270 "config": [ 00:15:08.270 { 00:15:08.270 "method": "bdev_set_options", 00:15:08.270 "params": { 00:15:08.270 "bdev_io_pool_size": 65535, 00:15:08.270 "bdev_io_cache_size": 256, 00:15:08.270 "bdev_auto_examine": true, 00:15:08.270 "iobuf_small_cache_size": 128, 00:15:08.270 "iobuf_large_cache_size": 16 00:15:08.270 } 00:15:08.270 }, 00:15:08.270 { 00:15:08.270 "method": "bdev_raid_set_options", 00:15:08.270 "params": { 00:15:08.270 "process_window_size_kb": 1024 00:15:08.270 } 00:15:08.270 }, 00:15:08.270 { 00:15:08.270 "method": "bdev_iscsi_set_options", 
00:15:08.270 "params": { 00:15:08.270 "timeout_sec": 30 00:15:08.270 } 00:15:08.270 }, 00:15:08.270 { 00:15:08.270 "method": "bdev_nvme_set_options", 00:15:08.270 "params": { 00:15:08.270 "action_on_timeout": "none", 00:15:08.270 "timeout_us": 0, 00:15:08.270 "timeout_admin_us": 0, 00:15:08.270 "keep_alive_timeout_ms": 10000, 00:15:08.270 "arbitration_burst": 0, 00:15:08.270 "low_priority_weight": 0, 00:15:08.270 "medium_priority_weight": 0, 00:15:08.270 "high_priority_weight": 0, 00:15:08.270 "nvme_adminq_poll_period_us": 10000, 00:15:08.270 "nvme_ioq_poll_period_us": 0, 00:15:08.270 "io_queue_requests": 512, 00:15:08.270 "delay_cmd_submit": true, 00:15:08.270 "transport_retry_count": 4, 00:15:08.270 "bdev_retry_count": 3, 00:15:08.270 "transport_ack_timeout": 0, 00:15:08.270 "ctrlr_loss_timeout_sec": 0, 00:15:08.270 "reconnect_delay_sec": 0, 00:15:08.270 "fast_io_fail_timeout_sec": 0, 00:15:08.270 "disable_auto_failback": false, 00:15:08.270 "generate_uuids": false, 00:15:08.270 "transport_tos": 0, 00:15:08.270 "nvme_error_stat": false, 00:15:08.270 "rdma_srq_size": 0, 00:15:08.270 "io_path_stat": false, 00:15:08.270 "allow_accel_sequence": false, 00:15:08.270 "rdma_max_cq_size": 0, 00:15:08.270 "rdma_cm_event_timeout_ms": 0, 00:15:08.270 "dhchap_digests": [ 00:15:08.270 "sha256", 00:15:08.270 "sha384", 00:15:08.270 "sha512" 00:15:08.270 ], 00:15:08.270 "dhchap_dhgroups": [ 00:15:08.270 "null", 00:15:08.270 "ffdhe2048", 00:15:08.270 "ffdhe3072", 00:15:08.270 "ffdhe4096", 00:15:08.270 "ffdhe6144", 00:15:08.270 "ffdhe8192" 00:15:08.270 ] 00:15:08.270 } 00:15:08.270 }, 00:15:08.270 { 00:15:08.270 "method": "bdev_nvme_attach_controller", 00:15:08.270 "params": { 00:15:08.270 "name": "nvme0", 00:15:08.270 "trtype": "TCP", 00:15:08.270 "adrfam": "IPv4", 00:15:08.270 "traddr": "10.0.0.2", 00:15:08.270 "trsvcid": "4420", 00:15:08.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.270 "prchk_reftag": false, 00:15:08.270 "prchk_guard": false, 00:15:08.270 "ctrlr_loss_timeout_sec": 0, 00:15:08.270 "reconnect_delay_sec": 0, 00:15:08.270 "fast_io_fail_timeout_sec": 0, 00:15:08.270 "psk": "key0", 00:15:08.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.270 "hdgst": false, 00:15:08.270 "ddgst": false 00:15:08.270 } 00:15:08.270 }, 00:15:08.270 { 00:15:08.270 "method": "bdev_nvme_set_hotplug", 00:15:08.270 "params": { 00:15:08.270 "period_us": 100000, 00:15:08.270 "enable": false 00:15:08.270 } 00:15:08.270 }, 00:15:08.270 { 00:15:08.270 "method": "bdev_enable_histogram", 00:15:08.270 "params": { 00:15:08.270 "name": "nvme0n1", 00:15:08.270 "enable": true 00:15:08.270 } 00:15:08.270 }, 00:15:08.270 { 00:15:08.270 "method": "bdev_wait_for_examine" 00:15:08.270 } 00:15:08.270 ] 00:15:08.270 }, 00:15:08.270 { 00:15:08.270 "subsystem": "nbd", 00:15:08.270 "config": [] 00:15:08.270 } 00:15:08.270 ] 00:15:08.270 }' 00:15:08.270 21:29:33 -- target/tls.sh@266 -- # killprocess 2608720 00:15:08.270 21:29:33 -- common/autotest_common.sh@936 -- # '[' -z 2608720 ']' 00:15:08.270 21:29:33 -- common/autotest_common.sh@940 -- # kill -0 2608720 00:15:08.270 21:29:33 -- common/autotest_common.sh@941 -- # uname 00:15:08.270 21:29:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.270 21:29:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2608720 00:15:08.270 21:29:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:08.270 21:29:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:08.270 21:29:33 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 2608720' 00:15:08.270 killing process with pid 2608720 00:15:08.270 21:29:33 -- common/autotest_common.sh@955 -- # kill 2608720 00:15:08.270 Received shutdown signal, test time was about 1.000000 seconds 00:15:08.270 00:15:08.270 Latency(us) 00:15:08.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.270 =================================================================================================================== 00:15:08.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:08.270 21:29:33 -- common/autotest_common.sh@960 -- # wait 2608720 00:15:08.529 21:29:34 -- target/tls.sh@267 -- # killprocess 2608650 00:15:08.529 21:29:34 -- common/autotest_common.sh@936 -- # '[' -z 2608650 ']' 00:15:08.529 21:29:34 -- common/autotest_common.sh@940 -- # kill -0 2608650 00:15:08.529 21:29:34 -- common/autotest_common.sh@941 -- # uname 00:15:08.529 21:29:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.529 21:29:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2608650 00:15:08.529 21:29:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:08.529 21:29:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:08.529 21:29:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2608650' 00:15:08.529 killing process with pid 2608650 00:15:08.529 21:29:34 -- common/autotest_common.sh@955 -- # kill 2608650 00:15:08.529 21:29:34 -- common/autotest_common.sh@960 -- # wait 2608650 00:15:08.787 21:29:34 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:08.787 21:29:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:08.787 21:29:34 -- target/tls.sh@269 -- # echo '{ 00:15:08.787 "subsystems": [ 00:15:08.787 { 00:15:08.787 "subsystem": "keyring", 00:15:08.787 "config": [ 00:15:08.787 { 00:15:08.787 "method": "keyring_file_add_key", 00:15:08.787 "params": { 00:15:08.787 "name": "key0", 00:15:08.787 "path": "/tmp/tmp.vEaWycc1IQ" 00:15:08.787 } 00:15:08.787 } 00:15:08.787 ] 00:15:08.787 }, 00:15:08.787 { 00:15:08.787 "subsystem": "iobuf", 00:15:08.787 "config": [ 00:15:08.787 { 00:15:08.787 "method": "iobuf_set_options", 00:15:08.787 "params": { 00:15:08.787 "small_pool_count": 8192, 00:15:08.787 "large_pool_count": 1024, 00:15:08.787 "small_bufsize": 8192, 00:15:08.787 "large_bufsize": 135168 00:15:08.787 } 00:15:08.787 } 00:15:08.787 ] 00:15:08.787 }, 00:15:08.787 { 00:15:08.787 "subsystem": "sock", 00:15:08.787 "config": [ 00:15:08.787 { 00:15:08.787 "method": "sock_impl_set_options", 00:15:08.787 "params": { 00:15:08.787 "impl_name": "posix", 00:15:08.787 "recv_buf_size": 2097152, 00:15:08.787 "send_buf_size": 2097152, 00:15:08.787 "enable_recv_pipe": true, 00:15:08.787 "enable_quickack": false, 00:15:08.787 "enable_placement_id": 0, 00:15:08.787 "enable_zerocopy_send_server": true, 00:15:08.787 "enable_zerocopy_send_client": false, 00:15:08.787 "zerocopy_threshold": 0, 00:15:08.787 "tls_version": 0, 00:15:08.787 "enable_ktls": false 00:15:08.787 } 00:15:08.787 }, 00:15:08.787 { 00:15:08.787 "method": "sock_impl_set_options", 00:15:08.787 "params": { 00:15:08.787 "impl_name": "ssl", 00:15:08.787 "recv_buf_size": 4096, 00:15:08.787 "send_buf_size": 4096, 00:15:08.787 "enable_recv_pipe": true, 00:15:08.787 "enable_quickack": false, 00:15:08.787 "enable_placement_id": 0, 00:15:08.787 "enable_zerocopy_send_server": true, 00:15:08.787 "enable_zerocopy_send_client": false, 00:15:08.787 "zerocopy_threshold": 0, 00:15:08.787 "tls_version": 0, 
00:15:08.787 "enable_ktls": false 00:15:08.787 } 00:15:08.787 } 00:15:08.787 ] 00:15:08.787 }, 00:15:08.787 { 00:15:08.787 "subsystem": "vmd", 00:15:08.787 "config": [] 00:15:08.787 }, 00:15:08.787 { 00:15:08.787 "subsystem": "accel", 00:15:08.787 "config": [ 00:15:08.787 { 00:15:08.787 "method": "accel_set_options", 00:15:08.787 "params": { 00:15:08.787 "small_cache_size": 128, 00:15:08.787 "large_cache_size": 16, 00:15:08.787 "task_count": 2048, 00:15:08.787 "sequence_count": 2048, 00:15:08.787 "buf_count": 2048 00:15:08.787 } 00:15:08.787 } 00:15:08.787 ] 00:15:08.787 }, 00:15:08.787 { 00:15:08.787 "subsystem": "bdev", 00:15:08.787 "config": [ 00:15:08.787 { 00:15:08.787 "method": "bdev_set_options", 00:15:08.787 "params": { 00:15:08.787 "bdev_io_pool_size": 65535, 00:15:08.787 "bdev_io_cache_size": 256, 00:15:08.787 "bdev_auto_examine": true, 00:15:08.787 "iobuf_small_cache_size": 128, 00:15:08.787 "iobuf_large_cache_size": 16 00:15:08.787 } 00:15:08.787 }, 00:15:08.787 { 00:15:08.787 "method": "bdev_raid_set_options", 00:15:08.787 "params": { 00:15:08.787 "process_window_size_kb": 1024 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "bdev_iscsi_set_options", 00:15:08.788 "params": { 00:15:08.788 "timeout_sec": 30 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "bdev_nvme_set_options", 00:15:08.788 "params": { 00:15:08.788 "action_on_timeout": "none", 00:15:08.788 "timeout_us": 0, 00:15:08.788 "timeout_admin_us": 0, 00:15:08.788 "keep_alive_timeout_ms": 10000, 00:15:08.788 "arbitration_burst": 0, 00:15:08.788 "low_priority_weight": 0, 00:15:08.788 "medium_priority_weight": 0, 00:15:08.788 "high_priority_weight": 0, 00:15:08.788 "nvme_adminq_poll_period_us": 10000, 00:15:08.788 "nvme_ioq_poll_period_us": 0, 00:15:08.788 "io_queue_requests": 0, 00:15:08.788 "delay_cmd_submit": true, 00:15:08.788 "transport_retry_count": 4, 00:15:08.788 "bdev_retry_count": 3, 00:15:08.788 "transport_ack_timeout": 0, 00:15:08.788 "ctrlr_loss_timeout_sec": 0, 00:15:08.788 "reconnect_delay_sec": 0, 00:15:08.788 "fast_io_fail_timeout_sec": 0, 00:15:08.788 "disable_auto_failback": false, 00:15:08.788 "generate_uuids": false, 00:15:08.788 "transport_tos": 0, 00:15:08.788 "nvme_error_stat": false, 00:15:08.788 "rdma_srq_size": 0, 00:15:08.788 "io_path_stat": false, 00:15:08.788 "allow_accel_sequence": false, 00:15:08.788 "rdma_max_cq_size": 0, 00:15:08.788 "rdma_cm_event_timeout_ms": 0, 00:15:08.788 "dhchap_digests": [ 00:15:08.788 "sha256", 00:15:08.788 "sha384", 00:15:08.788 "sha512" 00:15:08.788 ], 00:15:08.788 "dhchap_dhgroups": [ 00:15:08.788 "null", 00:15:08.788 "ffdhe2048", 00:15:08.788 "ffdhe3072", 00:15:08.788 "ffdhe4096", 00:15:08.788 "ffdhe6144", 00:15:08.788 "ffdhe8192" 00:15:08.788 ] 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "bdev_nvme_set_hotplug", 00:15:08.788 "params": { 00:15:08.788 "period_us": 100000, 00:15:08.788 "enable": false 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "bdev_malloc_create", 00:15:08.788 "params": { 00:15:08.788 "name": "malloc0", 00:15:08.788 "num_blocks": 8192, 00:15:08.788 "block_size": 4096, 00:15:08.788 "physical_block_size": 4096, 00:15:08.788 "uuid": "d5c344e3-76ca-4439-b28d-134f15af31bd", 00:15:08.788 "optimal_io_boundary": 0 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "bdev_wait_for_examine" 00:15:08.788 } 00:15:08.788 ] 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "subsystem": "nbd", 00:15:08.788 "config": [] 00:15:08.788 }, 00:15:08.788 { 
00:15:08.788 "subsystem": "scheduler", 00:15:08.788 "config": [ 00:15:08.788 { 00:15:08.788 "method": "framework_set_scheduler", 00:15:08.788 "params": { 00:15:08.788 "name": "static" 00:15:08.788 } 00:15:08.788 } 00:15:08.788 ] 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "subsystem": "nvmf", 00:15:08.788 "config": [ 00:15:08.788 { 00:15:08.788 "method": "nvmf_set_config", 00:15:08.788 "params": { 00:15:08.788 "discovery_filter": "match_any", 00:15:08.788 "admin_cmd_passthru": { 00:15:08.788 "identify_ctrlr": false 00:15:08.788 } 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "nvmf_set_max_subsystems", 00:15:08.788 "params": { 00:15:08.788 "max_subsystems": 1024 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "nvmf_set_crdt", 00:15:08.788 "params": { 00:15:08.788 "crdt1": 0, 00:15:08.788 "crdt2": 0, 00:15:08.788 "crdt3": 0 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "nvmf_create_transport", 00:15:08.788 "params": { 00:15:08.788 "trtype": "TCP", 00:15:08.788 "max_queue_depth": 128, 00:15:08.788 "max_io_qpairs_per_ctrlr": 127, 00:15:08.788 "in_capsule_data_size": 4096, 00:15:08.788 "max_io_size": 131072, 00:15:08.788 "io_unit_size": 131072, 00:15:08.788 "max_aq_depth": 128, 00:15:08.788 "num_shared_buffers": 511, 00:15:08.788 "buf_cache_size": 4294967295, 00:15:08.788 "dif_insert_or_strip": false, 00:15:08.788 "zcopy": false, 00:15:08.788 "c2h_success": false, 00:15:08.788 "sock_priority": 0, 00:15:08.788 "abort_timeout_sec": 1, 00:15:08.788 "ack_timeout": 0 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "nvmf_create_subsystem", 00:15:08.788 "params": { 00:15:08.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.788 "allow_any_host": false, 00:15:08.788 "serial_number": "00000000000000000000", 00:15:08.788 "model_number": "SPDK bdev Controller", 00:15:08.788 "max_namespaces": 32, 00:15:08.788 "min_cntlid": 1, 00:15:08.788 "max_cntlid": 65519, 00:15:08.788 "ana_reporting": false 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "nvmf_subsystem_add_host", 00:15:08.788 "params": { 00:15:08.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.788 "host": "nqn.2016-06.io.spdk:host1", 00:15:08.788 "psk": "key0" 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "nvmf_subsystem_add_ns", 00:15:08.788 "params": { 00:15:08.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.788 "namespace": { 00:15:08.788 "nsid": 1, 00:15:08.788 "bdev_name": "malloc0", 00:15:08.788 "nguid": "D5C344E376CA4439B28D134F15AF31BD", 00:15:08.788 "uuid": "d5c344e3-76ca-4439-b28d-134f15af31bd", 00:15:08.788 "no_auto_visible": false 00:15:08.788 } 00:15:08.788 } 00:15:08.788 }, 00:15:08.788 { 00:15:08.788 "method": "nvmf_subsystem_add_listener", 00:15:08.788 "params": { 00:15:08.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.788 "listen_address": { 00:15:08.788 "trtype": "TCP", 00:15:08.788 "adrfam": "IPv4", 00:15:08.788 "traddr": "10.0.0.2", 00:15:08.788 "trsvcid": "4420" 00:15:08.788 }, 00:15:08.788 "secure_channel": true 00:15:08.788 } 00:15:08.788 } 00:15:08.788 ] 00:15:08.788 } 00:15:08.788 ] 00:15:08.788 }' 00:15:08.788 21:29:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:08.788 21:29:34 -- common/autotest_common.sh@10 -- # set +x 00:15:08.788 21:29:34 -- nvmf/common.sh@470 -- # nvmfpid=2609137 00:15:08.788 21:29:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:08.788 
21:29:34 -- nvmf/common.sh@471 -- # waitforlisten 2609137 00:15:08.788 21:29:34 -- common/autotest_common.sh@817 -- # '[' -z 2609137 ']' 00:15:08.788 21:29:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.788 21:29:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:08.788 21:29:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.788 21:29:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:08.788 21:29:34 -- common/autotest_common.sh@10 -- # set +x 00:15:08.788 [2024-04-24 21:29:34.381923] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:08.788 [2024-04-24 21:29:34.382042] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.788 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.788 [2024-04-24 21:29:34.451203] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.047 [2024-04-24 21:29:34.561129] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.047 [2024-04-24 21:29:34.561196] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.047 [2024-04-24 21:29:34.561221] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.047 [2024-04-24 21:29:34.561235] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.047 [2024-04-24 21:29:34.561246] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
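The app_setup_trace notices above spell out the two supported ways to inspect the tracepoints this target was started with (-e 0xFFFF enables all groups). A short sketch of both, assuming the target from the trace is still running with shared-memory id 0:

  build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # decode a live snapshot of the trace buffer, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/                       # or stash the raw shm file for offline analysis/debug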
00:15:09.047 [2024-04-24 21:29:34.561358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.307 [2024-04-24 21:29:34.801451] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.307 [2024-04-24 21:29:34.833464] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:09.307 [2024-04-24 21:29:34.844850] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.877 21:29:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:09.877 21:29:35 -- common/autotest_common.sh@850 -- # return 0 00:15:09.877 21:29:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:09.877 21:29:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:09.877 21:29:35 -- common/autotest_common.sh@10 -- # set +x 00:15:09.877 21:29:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.877 21:29:35 -- target/tls.sh@272 -- # bdevperf_pid=2609287 00:15:09.877 21:29:35 -- target/tls.sh@273 -- # waitforlisten 2609287 /var/tmp/bdevperf.sock 00:15:09.877 21:29:35 -- common/autotest_common.sh@817 -- # '[' -z 2609287 ']' 00:15:09.877 21:29:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.877 21:29:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:09.877 21:29:35 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:09.877 21:29:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.877 21:29:35 -- target/tls.sh@270 -- # echo '{ 00:15:09.877 "subsystems": [ 00:15:09.877 { 00:15:09.877 "subsystem": "keyring", 00:15:09.877 "config": [ 00:15:09.877 { 00:15:09.877 "method": "keyring_file_add_key", 00:15:09.877 "params": { 00:15:09.877 "name": "key0", 00:15:09.877 "path": "/tmp/tmp.vEaWycc1IQ" 00:15:09.877 } 00:15:09.877 } 00:15:09.877 ] 00:15:09.877 }, 00:15:09.878 { 00:15:09.878 "subsystem": "iobuf", 00:15:09.878 "config": [ 00:15:09.878 { 00:15:09.878 "method": "iobuf_set_options", 00:15:09.878 "params": { 00:15:09.878 "small_pool_count": 8192, 00:15:09.878 "large_pool_count": 1024, 00:15:09.878 "small_bufsize": 8192, 00:15:09.878 "large_bufsize": 135168 00:15:09.878 } 00:15:09.878 } 00:15:09.878 ] 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "subsystem": "sock", 00:15:09.878 "config": [ 00:15:09.878 { 00:15:09.878 "method": "sock_impl_set_options", 00:15:09.878 "params": { 00:15:09.878 "impl_name": "posix", 00:15:09.878 "recv_buf_size": 2097152, 00:15:09.878 "send_buf_size": 2097152, 00:15:09.878 "enable_recv_pipe": true, 00:15:09.878 "enable_quickack": false, 00:15:09.878 "enable_placement_id": 0, 00:15:09.878 "enable_zerocopy_send_server": true, 00:15:09.878 "enable_zerocopy_send_client": false, 00:15:09.878 "zerocopy_threshold": 0, 00:15:09.878 "tls_version": 0, 00:15:09.878 "enable_ktls": false 00:15:09.878 } 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "method": "sock_impl_set_options", 00:15:09.878 "params": { 00:15:09.878 "impl_name": "ssl", 00:15:09.878 "recv_buf_size": 4096, 00:15:09.878 "send_buf_size": 4096, 00:15:09.878 "enable_recv_pipe": true, 00:15:09.878 "enable_quickack": false, 00:15:09.878 "enable_placement_id": 0, 00:15:09.878 "enable_zerocopy_send_server": true, 00:15:09.878 "enable_zerocopy_send_client": false, 00:15:09.878 
"zerocopy_threshold": 0, 00:15:09.878 "tls_version": 0, 00:15:09.878 "enable_ktls": false 00:15:09.878 } 00:15:09.878 } 00:15:09.878 ] 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "subsystem": "vmd", 00:15:09.878 "config": [] 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "subsystem": "accel", 00:15:09.878 "config": [ 00:15:09.878 { 00:15:09.878 "method": "accel_set_options", 00:15:09.878 "params": { 00:15:09.878 "small_cache_size": 128, 00:15:09.878 "large_cache_size": 16, 00:15:09.878 "task_count": 2048, 00:15:09.878 "sequence_count": 2048, 00:15:09.878 "buf_count": 2048 00:15:09.878 } 00:15:09.878 } 00:15:09.878 ] 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "subsystem": "bdev", 00:15:09.878 "config": [ 00:15:09.878 { 00:15:09.878 "method": "bdev_set_options", 00:15:09.878 "params": { 00:15:09.878 "bdev_io_pool_size": 65535, 00:15:09.878 "bdev_io_cache_size": 256, 00:15:09.878 "bdev_auto_examine": true, 00:15:09.878 "iobuf_small_cache_size": 128, 00:15:09.878 "iobuf_large_cache_size": 16 00:15:09.878 } 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "method": "bdev_raid_set_options", 00:15:09.878 "params": { 00:15:09.878 "process_window_size_kb": 1024 00:15:09.878 } 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "method": "bdev_iscsi_set_options", 00:15:09.878 "params": { 00:15:09.878 "timeout_sec": 30 00:15:09.878 } 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "method": "bdev_nvme_set_options", 00:15:09.878 "params": { 00:15:09.878 "action_on_timeout": "none", 00:15:09.878 "timeout_us": 0, 00:15:09.878 "timeout_admin_us": 0, 00:15:09.878 "keep_alive_timeout_ms": 10000, 00:15:09.878 "arbitration_burst": 0, 00:15:09.878 "low_priority_weight": 0, 00:15:09.878 "medium_priority_weight": 0, 00:15:09.878 "high_priority_weight": 0, 00:15:09.878 "nvme_adminq_poll_period_us": 10000, 00:15:09.878 "nvme_ioq_poll_period_us": 0, 00:15:09.878 "io_queue_requests": 512, 00:15:09.878 "delay_cmd_submit": true, 00:15:09.878 "transport_retry_count": 4, 00:15:09.878 "bdev_retry_count": 3, 00:15:09.878 "transport_ack_timeout": 0, 00:15:09.878 "ctrlr_loss_timeout_sec": 0, 00:15:09.878 "reconnect_delay_sec": 0, 00:15:09.878 "fast_io_fail_timeout_sec": 0, 00:15:09.878 "disable_auto_failback": false, 00:15:09.878 "generate_uuids": false, 00:15:09.878 "transport_tos": 0, 00:15:09.878 "nvme_error_stat": false, 00:15:09.878 "rdma_srq_size": 0, 00:15:09.878 "io_path_stat": false, 00:15:09.878 "allow_accel_sequence": false, 00:15:09.878 "rdma_max_cq_size": 0, 00:15:09.878 "rdma_cm_event_timeout_ms": 0, 00:15:09.878 "dhchap_digests": [ 00:15:09.878 "sha256", 00:15:09.878 "sha384", 00:15:09.878 "sha512" 00:15:09.878 ], 00:15:09.878 "dhchap_dhgroups": [ 00:15:09.878 "null", 00:15:09.878 "ffdhe2048", 00:15:09.878 "ffdhe3072", 00:15:09.878 "ffdhe4Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:09.878 096", 00:15:09.878 "ffdhe6144", 00:15:09.878 "ffdhe8192" 00:15:09.878 ] 00:15:09.878 } 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "method": "bdev_nvme_attach_controller", 00:15:09.878 "params": { 00:15:09.878 "name": "nvme0", 00:15:09.878 "trtype": "TCP", 00:15:09.878 "adrfam": "IPv4", 00:15:09.878 "traddr": "10.0.0.2", 00:15:09.878 "trsvcid": "4420", 00:15:09.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.878 "prchk_reftag": false, 00:15:09.878 "prchk_guard": false, 00:15:09.878 "ctrlr_loss_timeout_sec": 0, 00:15:09.878 "reconnect_delay_sec": 0, 00:15:09.878 "fast_io_fail_timeout_sec": 0, 00:15:09.878 "psk": "key0", 00:15:09.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.878 "hdgst": false, 00:15:09.878 "ddgst": false 00:15:09.878 } 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "method": "bdev_nvme_set_hotplug", 00:15:09.878 "params": { 00:15:09.878 "period_us": 100000, 00:15:09.878 "enable": false 00:15:09.878 } 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "method": "bdev_enable_histogram", 00:15:09.878 "params": { 00:15:09.878 "name": "nvme0n1", 00:15:09.878 "enable": true 00:15:09.878 } 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "method": "bdev_wait_for_examine" 00:15:09.878 } 00:15:09.878 ] 00:15:09.878 }, 00:15:09.878 { 00:15:09.878 "subsystem": "nbd", 00:15:09.878 "config": [] 00:15:09.878 } 00:15:09.878 ] 00:15:09.878 }' 00:15:09.878 21:29:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:09.878 21:29:35 -- common/autotest_common.sh@10 -- # set +x 00:15:09.878 [2024-04-24 21:29:35.403583] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:09.878 [2024-04-24 21:29:35.403709] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609287 ] 00:15:09.878 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.878 [2024-04-24 21:29:35.462882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.139 [2024-04-24 21:29:35.579906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.139 [2024-04-24 21:29:35.753592] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:11.078 21:29:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:11.078 21:29:36 -- common/autotest_common.sh@850 -- # return 0 00:15:11.078 21:29:36 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:11.078 21:29:36 -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:11.078 21:29:36 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.078 21:29:36 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:11.078 Running I/O for 1 seconds... 
00:15:12.461 00:15:12.461 Latency(us) 00:15:12.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.461 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:12.461 Verification LBA range: start 0x0 length 0x2000 00:15:12.461 nvme0n1 : 1.08 1086.55 4.24 0.00 0.00 114208.67 11747.93 166995.44 00:15:12.461 =================================================================================================================== 00:15:12.461 Total : 1086.55 4.24 0.00 0.00 114208.67 11747.93 166995.44 00:15:12.461 0 00:15:12.461 21:29:37 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:12.461 21:29:37 -- target/tls.sh@279 -- # cleanup 00:15:12.461 21:29:37 -- target/tls.sh@15 -- # process_shm --id 0 00:15:12.461 21:29:37 -- common/autotest_common.sh@794 -- # type=--id 00:15:12.461 21:29:37 -- common/autotest_common.sh@795 -- # id=0 00:15:12.461 21:29:37 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:12.461 21:29:37 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:12.461 21:29:37 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:12.461 21:29:37 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:12.461 21:29:37 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:12.461 21:29:37 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:12.461 nvmf_trace.0 00:15:12.461 21:29:37 -- common/autotest_common.sh@809 -- # return 0 00:15:12.461 21:29:37 -- target/tls.sh@16 -- # killprocess 2609287 00:15:12.461 21:29:37 -- common/autotest_common.sh@936 -- # '[' -z 2609287 ']' 00:15:12.461 21:29:37 -- common/autotest_common.sh@940 -- # kill -0 2609287 00:15:12.461 21:29:37 -- common/autotest_common.sh@941 -- # uname 00:15:12.461 21:29:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.461 21:29:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2609287 00:15:12.461 21:29:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:12.461 21:29:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:12.461 21:29:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2609287' 00:15:12.461 killing process with pid 2609287 00:15:12.461 21:29:37 -- common/autotest_common.sh@955 -- # kill 2609287 00:15:12.461 Received shutdown signal, test time was about 1.000000 seconds 00:15:12.461 00:15:12.461 Latency(us) 00:15:12.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.461 =================================================================================================================== 00:15:12.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:12.461 21:29:37 -- common/autotest_common.sh@960 -- # wait 2609287 00:15:12.721 21:29:38 -- target/tls.sh@17 -- # nvmftestfini 00:15:12.721 21:29:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:12.722 21:29:38 -- nvmf/common.sh@117 -- # sync 00:15:12.722 21:29:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.722 21:29:38 -- nvmf/common.sh@120 -- # set +e 00:15:12.722 21:29:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.722 21:29:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.722 rmmod nvme_tcp 00:15:12.722 rmmod nvme_fabrics 00:15:12.722 rmmod nvme_keyring 00:15:12.722 21:29:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.722 21:29:38 -- 
nvmf/common.sh@124 -- # set -e 00:15:12.722 21:29:38 -- nvmf/common.sh@125 -- # return 0 00:15:12.722 21:29:38 -- nvmf/common.sh@478 -- # '[' -n 2609137 ']' 00:15:12.722 21:29:38 -- nvmf/common.sh@479 -- # killprocess 2609137 00:15:12.722 21:29:38 -- common/autotest_common.sh@936 -- # '[' -z 2609137 ']' 00:15:12.722 21:29:38 -- common/autotest_common.sh@940 -- # kill -0 2609137 00:15:12.722 21:29:38 -- common/autotest_common.sh@941 -- # uname 00:15:12.722 21:29:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.722 21:29:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2609137 00:15:12.722 21:29:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:12.722 21:29:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:12.722 21:29:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2609137' 00:15:12.722 killing process with pid 2609137 00:15:12.722 21:29:38 -- common/autotest_common.sh@955 -- # kill 2609137 00:15:12.722 21:29:38 -- common/autotest_common.sh@960 -- # wait 2609137 00:15:12.981 21:29:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:12.981 21:29:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:12.981 21:29:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:12.981 21:29:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.981 21:29:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.981 21:29:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.981 21:29:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.981 21:29:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.523 21:29:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:15.523 21:29:40 -- target/tls.sh@18 -- # rm -f /tmp/tmp.xOe0tgigMk /tmp/tmp.itaE5SGSRx /tmp/tmp.vEaWycc1IQ 00:15:15.523 00:15:15.523 real 1m22.330s 00:15:15.523 user 2m8.459s 00:15:15.523 sys 0m29.437s 00:15:15.523 21:29:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:15.523 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:15:15.523 ************************************ 00:15:15.523 END TEST nvmf_tls 00:15:15.523 ************************************ 00:15:15.523 21:29:40 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:15.523 21:29:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:15.523 21:29:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.523 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:15:15.523 ************************************ 00:15:15.523 START TEST nvmf_fips 00:15:15.523 ************************************ 00:15:15.523 21:29:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:15.523 * Looking for test storage... 
00:15:15.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:15:15.523 21:29:40 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.523 21:29:40 -- nvmf/common.sh@7 -- # uname -s 00:15:15.523 21:29:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.523 21:29:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.523 21:29:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.523 21:29:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.523 21:29:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.523 21:29:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.523 21:29:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.523 21:29:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.523 21:29:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.523 21:29:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.523 21:29:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.523 21:29:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.523 21:29:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.523 21:29:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.523 21:29:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.523 21:29:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.523 21:29:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.523 21:29:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.523 21:29:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.524 21:29:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.524 21:29:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.524 21:29:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.524 21:29:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.524 21:29:40 -- paths/export.sh@5 -- # export PATH 00:15:15.524 21:29:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.524 21:29:40 -- nvmf/common.sh@47 -- # : 0 00:15:15.524 21:29:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.524 21:29:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.524 21:29:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.524 21:29:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.524 21:29:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.524 21:29:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.524 21:29:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.524 21:29:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.524 21:29:40 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.524 21:29:40 -- fips/fips.sh@89 -- # check_openssl_version 00:15:15.524 21:29:40 -- fips/fips.sh@83 -- # local target=3.0.0 00:15:15.524 21:29:40 -- fips/fips.sh@85 -- # openssl version 00:15:15.524 21:29:40 -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:15.524 21:29:40 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:15.524 21:29:40 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:15.524 21:29:40 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:15.524 21:29:40 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:15.524 21:29:40 -- scripts/common.sh@333 -- # IFS=.-: 00:15:15.524 21:29:40 -- scripts/common.sh@333 -- # read -ra ver1 00:15:15.524 21:29:40 -- scripts/common.sh@334 -- # IFS=.-: 00:15:15.524 21:29:40 -- scripts/common.sh@334 -- # read -ra ver2 00:15:15.524 21:29:40 -- scripts/common.sh@335 -- # local 'op=>=' 00:15:15.524 21:29:40 -- scripts/common.sh@337 -- # ver1_l=3 00:15:15.524 21:29:40 -- scripts/common.sh@338 -- # ver2_l=3 00:15:15.524 21:29:40 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:15.524 21:29:40 -- scripts/common.sh@341 -- # case "$op" in 00:15:15.524 21:29:40 -- scripts/common.sh@345 -- # : 1 00:15:15.524 21:29:40 -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:15.524 21:29:40 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.524 21:29:40 -- scripts/common.sh@362 -- # decimal 3 00:15:15.524 21:29:40 -- scripts/common.sh@350 -- # local d=3 00:15:15.524 21:29:40 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:15.524 21:29:40 -- scripts/common.sh@352 -- # echo 3 00:15:15.524 21:29:40 -- scripts/common.sh@362 -- # ver1[v]=3 00:15:15.524 21:29:40 -- scripts/common.sh@363 -- # decimal 3 00:15:15.524 21:29:40 -- scripts/common.sh@350 -- # local d=3 00:15:15.524 21:29:40 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:15.524 21:29:40 -- scripts/common.sh@352 -- # echo 3 00:15:15.524 21:29:40 -- scripts/common.sh@363 -- # ver2[v]=3 00:15:15.524 21:29:40 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:15.524 21:29:40 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:15.524 21:29:40 -- scripts/common.sh@361 -- # (( v++ )) 00:15:15.524 21:29:40 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:15.524 21:29:40 -- scripts/common.sh@362 -- # decimal 0 00:15:15.524 21:29:40 -- scripts/common.sh@350 -- # local d=0 00:15:15.524 21:29:40 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:15.524 21:29:40 -- scripts/common.sh@352 -- # echo 0 00:15:15.524 21:29:40 -- scripts/common.sh@362 -- # ver1[v]=0 00:15:15.524 21:29:40 -- scripts/common.sh@363 -- # decimal 0 00:15:15.524 21:29:40 -- scripts/common.sh@350 -- # local d=0 00:15:15.524 21:29:40 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:15.524 21:29:40 -- scripts/common.sh@352 -- # echo 0 00:15:15.524 21:29:40 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:15.524 21:29:40 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:15.524 21:29:40 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:15.524 21:29:40 -- scripts/common.sh@361 -- # (( v++ )) 00:15:15.524 21:29:40 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:15.524 21:29:40 -- scripts/common.sh@362 -- # decimal 9 00:15:15.524 21:29:40 -- scripts/common.sh@350 -- # local d=9 00:15:15.524 21:29:40 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:15.524 21:29:40 -- scripts/common.sh@352 -- # echo 9 00:15:15.524 21:29:40 -- scripts/common.sh@362 -- # ver1[v]=9 00:15:15.524 21:29:40 -- scripts/common.sh@363 -- # decimal 0 00:15:15.524 21:29:40 -- scripts/common.sh@350 -- # local d=0 00:15:15.524 21:29:40 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:15.524 21:29:40 -- scripts/common.sh@352 -- # echo 0 00:15:15.524 21:29:40 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:15.524 21:29:40 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:15.524 21:29:40 -- scripts/common.sh@364 -- # return 0 00:15:15.524 21:29:40 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:15.524 21:29:40 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:15:15.524 21:29:40 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:15.524 21:29:40 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:15.524 21:29:40 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:15.524 21:29:40 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:15.524 21:29:40 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:15.524 21:29:40 -- fips/fips.sh@113 -- # build_openssl_config 00:15:15.524 21:29:40 -- fips/fips.sh@37 -- # cat 00:15:15.524 21:29:40 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:15:15.524 21:29:40 -- fips/fips.sh@58 -- # cat - 00:15:15.524 21:29:40 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:15.524 21:29:40 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:15.524 21:29:40 -- fips/fips.sh@116 -- # mapfile -t providers 00:15:15.524 21:29:40 -- fips/fips.sh@116 -- # openssl list -providers 00:15:15.524 21:29:40 -- fips/fips.sh@116 -- # grep name 00:15:15.524 21:29:40 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:15.524 21:29:40 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:15.524 21:29:40 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:15.524 21:29:40 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:15.524 21:29:40 -- fips/fips.sh@127 -- # : 00:15:15.524 21:29:40 -- common/autotest_common.sh@638 -- # local es=0 00:15:15.524 21:29:40 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:15.524 21:29:40 -- common/autotest_common.sh@626 -- # local arg=openssl 00:15:15.524 21:29:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:15.524 21:29:40 -- common/autotest_common.sh@630 -- # type -t openssl 00:15:15.524 21:29:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:15.524 21:29:40 -- common/autotest_common.sh@632 -- # type -P openssl 00:15:15.524 21:29:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:15.524 21:29:40 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:15:15.524 21:29:40 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:15:15.524 21:29:40 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:15:15.524 Error setting digest 00:15:15.524 0092906DAB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:15.524 0092906DAB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:15.524 21:29:40 -- common/autotest_common.sh@641 -- # es=1 00:15:15.524 21:29:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:15.524 21:29:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:15.524 21:29:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:15.524 21:29:40 -- fips/fips.sh@130 -- # nvmftestinit 00:15:15.524 21:29:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:15.524 21:29:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.524 21:29:40 -- nvmf/common.sh@437 -- # prepare_net_devs 
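The md5 probe above is deliberately expected to fail: with OPENSSL_CONF pointing at the generated spdk_fips.conf, only the FIPS provider's algorithms are available, so the "Error setting digest" output is the pass condition. A compressed sketch of the same gate (provider names vary by distro, hence a loose grep rather than exact matching):

  export OPENSSL_CONF=spdk_fips.conf
  openssl list -providers | grep -i name         # expect both a base and a fips provider to be listed
  if echo hello | openssl md5 2>/dev/null; then  # MD5 succeeding would mean FIPS is not being enforced
      echo "FIPS mode not active" >&2
      exit 1
  fi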
00:15:15.524 21:29:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:15.524 21:29:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:15.524 21:29:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.524 21:29:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.524 21:29:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.524 21:29:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:15.524 21:29:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:15.524 21:29:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:15.524 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:15:17.433 21:29:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:17.433 21:29:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.433 21:29:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.433 21:29:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.433 21:29:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.433 21:29:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.433 21:29:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.433 21:29:42 -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.433 21:29:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.433 21:29:42 -- nvmf/common.sh@296 -- # e810=() 00:15:17.433 21:29:42 -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.433 21:29:42 -- nvmf/common.sh@297 -- # x722=() 00:15:17.433 21:29:42 -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.433 21:29:42 -- nvmf/common.sh@298 -- # mlx=() 00:15:17.433 21:29:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.433 21:29:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.433 21:29:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.433 21:29:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.433 21:29:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.433 21:29:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.433 21:29:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.433 21:29:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.433 21:29:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.433 21:29:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.433 21:29:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.433 21:29:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.433 21:29:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.433 21:29:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:17.433 21:29:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.433 21:29:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.433 21:29:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:17.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:17.433 21:29:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.433 21:29:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:17.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:17.433 21:29:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.433 21:29:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.433 21:29:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.433 21:29:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:17.433 21:29:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.433 21:29:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:17.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:17.433 21:29:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.433 21:29:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.433 21:29:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.433 21:29:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:17.433 21:29:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.433 21:29:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:17.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:17.433 21:29:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.433 21:29:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:17.433 21:29:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:17.433 21:29:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:17.433 21:29:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:17.433 21:29:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.433 21:29:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.433 21:29:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.433 21:29:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:17.433 21:29:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.433 21:29:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.433 21:29:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:17.433 21:29:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.433 21:29:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.433 21:29:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:17.433 21:29:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:17.433 21:29:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.433 21:29:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.433 21:29:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.433 21:29:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:15:17.433 21:29:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:17.433 21:29:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.433 21:29:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.433 21:29:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.433 21:29:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:15:17.433 00:15:17.433 --- 10.0.0.2 ping statistics --- 00:15:17.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.433 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:15:17.433 21:29:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:15:17.433 00:15:17.433 --- 10.0.0.1 ping statistics --- 00:15:17.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.433 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:15:17.433 21:29:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.433 21:29:43 -- nvmf/common.sh@411 -- # return 0 00:15:17.433 21:29:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:17.433 21:29:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.433 21:29:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:17.433 21:29:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:17.433 21:29:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.433 21:29:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:17.433 21:29:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:17.433 21:29:43 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:17.433 21:29:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:17.433 21:29:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:17.433 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:15:17.433 21:29:43 -- nvmf/common.sh@470 -- # nvmfpid=2611598 00:15:17.433 21:29:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.433 21:29:43 -- nvmf/common.sh@471 -- # waitforlisten 2611598 00:15:17.433 21:29:43 -- common/autotest_common.sh@817 -- # '[' -z 2611598 ']' 00:15:17.433 21:29:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.433 21:29:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.433 21:29:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.433 21:29:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.433 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:15:17.692 [2024-04-24 21:29:43.179839] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
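The nvmf_tcp_init sequence above builds a self-contained NVMe/TCP topology out of the two E810 ports: one port is moved into a private network namespace to act as the target, while the other stays in the default namespace as the initiator. A minimal standalone sketch of that setup, reusing the interface and namespace names from this trace (run as root; the names are environment-specific):

  TGT_IF=cvl_0_0        # port moved into the namespace (target side)
  INI_IF=cvl_0_1        # port left in the default namespace (initiator side)
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  # 10.0.0.1 = initiator, 10.0.0.2 = target, matching the pings above.
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP port for traffic arriving on the initiator interface.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

  # Bidirectional reachability checks, exactly as the harness performs them.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

Every target process in this log is then launched under "ip netns exec cvl_0_0_ns_spdk", which is why nvmf_tgt listens on 10.0.0.2 while the initiator-side tools connect from the default namespace.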
00:15:17.692 [2024-04-24 21:29:43.179922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.692 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.692 [2024-04-24 21:29:43.244228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.692 [2024-04-24 21:29:43.353878] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.692 [2024-04-24 21:29:43.353954] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.692 [2024-04-24 21:29:43.353981] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.692 [2024-04-24 21:29:43.353995] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.692 [2024-04-24 21:29:43.354006] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.692 [2024-04-24 21:29:43.354038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.627 21:29:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:18.627 21:29:44 -- common/autotest_common.sh@850 -- # return 0 00:15:18.627 21:29:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:18.627 21:29:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:18.627 21:29:44 -- common/autotest_common.sh@10 -- # set +x 00:15:18.627 21:29:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.627 21:29:44 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:18.627 21:29:44 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:18.627 21:29:44 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:18.627 21:29:44 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:18.627 21:29:44 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:18.627 21:29:44 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:18.627 21:29:44 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:18.627 21:29:44 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.886 [2024-04-24 21:29:44.384178] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.886 [2024-04-24 21:29:44.400179] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:18.886 [2024-04-24 21:29:44.400404] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.886 [2024-04-24 21:29:44.432762] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:18.886 malloc0 00:15:18.886 21:29:44 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:18.886 21:29:44 -- fips/fips.sh@147 -- # bdevperf_pid=2611811 00:15:18.886 21:29:44 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:18.886 21:29:44 -- 
fips/fips.sh@148 -- # waitforlisten 2611811 /var/tmp/bdevperf.sock 00:15:18.886 21:29:44 -- common/autotest_common.sh@817 -- # '[' -z 2611811 ']' 00:15:18.886 21:29:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.886 21:29:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:18.886 21:29:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:18.886 21:29:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:18.886 21:29:44 -- common/autotest_common.sh@10 -- # set +x 00:15:18.886 [2024-04-24 21:29:44.522231] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:18.886 [2024-04-24 21:29:44.522323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2611811 ] 00:15:18.886 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.146 [2024-04-24 21:29:44.584903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.146 [2024-04-24 21:29:44.694514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.084 21:29:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:20.084 21:29:45 -- common/autotest_common.sh@850 -- # return 0 00:15:20.084 21:29:45 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:20.084 [2024-04-24 21:29:45.713802] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:20.084 [2024-04-24 21:29:45.713936] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:20.344 TLSTESTn1 00:15:20.344 21:29:45 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:20.344 Running I/O for 10 seconds... 
00:15:32.557 00:15:32.557 Latency(us) 00:15:32.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.557 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:32.557 Verification LBA range: start 0x0 length 0x2000 00:15:32.557 TLSTESTn1 : 10.07 1366.39 5.34 0.00 0.00 93405.15 6165.24 117285.17 00:15:32.557 =================================================================================================================== 00:15:32.557 Total : 1366.39 5.34 0.00 0.00 93405.15 6165.24 117285.17 00:15:32.557 0 00:15:32.557 21:29:56 -- fips/fips.sh@1 -- # cleanup 00:15:32.557 21:29:56 -- fips/fips.sh@15 -- # process_shm --id 0 00:15:32.557 21:29:56 -- common/autotest_common.sh@794 -- # type=--id 00:15:32.557 21:29:56 -- common/autotest_common.sh@795 -- # id=0 00:15:32.557 21:29:56 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:32.557 21:29:56 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:32.557 21:29:56 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:32.557 21:29:56 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:32.557 21:29:56 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:32.557 21:29:56 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:32.557 nvmf_trace.0 00:15:32.557 21:29:56 -- common/autotest_common.sh@809 -- # return 0 00:15:32.557 21:29:56 -- fips/fips.sh@16 -- # killprocess 2611811 00:15:32.557 21:29:56 -- common/autotest_common.sh@936 -- # '[' -z 2611811 ']' 00:15:32.557 21:29:56 -- common/autotest_common.sh@940 -- # kill -0 2611811 00:15:32.557 21:29:56 -- common/autotest_common.sh@941 -- # uname 00:15:32.557 21:29:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.557 21:29:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2611811 00:15:32.557 21:29:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:32.557 21:29:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:32.557 21:29:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2611811' 00:15:32.557 killing process with pid 2611811 00:15:32.557 21:29:56 -- common/autotest_common.sh@955 -- # kill 2611811 00:15:32.557 Received shutdown signal, test time was about 10.000000 seconds 00:15:32.557 00:15:32.557 Latency(us) 00:15:32.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.557 =================================================================================================================== 00:15:32.557 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:32.557 [2024-04-24 21:29:56.102920] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:32.557 21:29:56 -- common/autotest_common.sh@960 -- # wait 2611811 00:15:32.557 21:29:56 -- fips/fips.sh@17 -- # nvmftestfini 00:15:32.557 21:29:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:32.557 21:29:56 -- nvmf/common.sh@117 -- # sync 00:15:32.557 21:29:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:32.557 21:29:56 -- nvmf/common.sh@120 -- # set +e 00:15:32.557 21:29:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:32.557 21:29:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:32.557 rmmod nvme_tcp 00:15:32.557 rmmod nvme_fabrics 00:15:32.557 rmmod nvme_keyring 
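The TLSTESTn1 run that just completed is driven entirely over JSON-RPC. In the sketch below, the key material, file permissions, and attach options are copied from the trace, while the nvmf_subsystem_add_host call is only inferred from the nvmf_tcp_subsystem_add_host deprecation warning above: setup_nvmf_tgt_conf's body is not echoed here, so treat the target-side commands as an assumption. Paths are shortened for readability.

  # PSK in NVMe TLS interchange format, written with owner-only permissions.
  KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$KEY" > /tmp/key.txt
  chmod 0600 /tmp/key.txt

  # Target side (assumed): associate the host NQN with its PSK file. The
  # trace warns that path-based PSKs are deprecated for removal in v24.09.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/key.txt

  # Initiator side (verbatim from the trace): attach a controller through
  # bdevperf's RPC socket with the same PSK, so the TCP connection to
  # 10.0.0.2:4420 is carried over TLS.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/key.txt

  # Kick off the queued workload (-q 128, 4096-byte verify) for 10 seconds.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The numbers in the results table above are self-consistent: at queue depth 128 with a 93.4 ms average latency, Little's law gives roughly 128 / 0.0934 s ≈ 1370 IOPS, matching the reported 1366.39.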
00:15:32.557 21:29:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:32.557 21:29:56 -- nvmf/common.sh@124 -- # set -e 00:15:32.557 21:29:56 -- nvmf/common.sh@125 -- # return 0 00:15:32.557 21:29:56 -- nvmf/common.sh@478 -- # '[' -n 2611598 ']' 00:15:32.557 21:29:56 -- nvmf/common.sh@479 -- # killprocess 2611598 00:15:32.557 21:29:56 -- common/autotest_common.sh@936 -- # '[' -z 2611598 ']' 00:15:32.557 21:29:56 -- common/autotest_common.sh@940 -- # kill -0 2611598 00:15:32.557 21:29:56 -- common/autotest_common.sh@941 -- # uname 00:15:32.557 21:29:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.557 21:29:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2611598 00:15:32.557 21:29:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:32.557 21:29:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:32.557 21:29:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2611598' 00:15:32.557 killing process with pid 2611598 00:15:32.557 21:29:56 -- common/autotest_common.sh@955 -- # kill 2611598 00:15:32.557 [2024-04-24 21:29:56.458761] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:32.557 21:29:56 -- common/autotest_common.sh@960 -- # wait 2611598 00:15:32.557 21:29:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:32.557 21:29:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:32.557 21:29:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:32.557 21:29:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.557 21:29:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.557 21:29:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.557 21:29:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.557 21:29:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.128 21:29:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:33.128 21:29:58 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:33.128 00:15:33.128 real 0m18.063s 00:15:33.128 user 0m23.009s 00:15:33.128 sys 0m6.614s 00:15:33.128 21:29:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:33.128 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:15:33.128 ************************************ 00:15:33.128 END TEST nvmf_fips 00:15:33.128 ************************************ 00:15:33.387 21:29:58 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:15:33.387 21:29:58 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:15:33.387 21:29:58 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:15:33.387 21:29:58 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:15:33.387 21:29:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:33.387 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:15:35.291 21:30:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:35.291 21:30:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.291 21:30:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.291 21:30:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.291 21:30:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.291 21:30:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.291 21:30:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.291 21:30:00 -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.291 21:30:00 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:15:35.291 21:30:00 -- nvmf/common.sh@296 -- # e810=() 00:15:35.291 21:30:00 -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.291 21:30:00 -- nvmf/common.sh@297 -- # x722=() 00:15:35.291 21:30:00 -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.291 21:30:00 -- nvmf/common.sh@298 -- # mlx=() 00:15:35.291 21:30:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.291 21:30:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.292 21:30:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.292 21:30:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.292 21:30:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.292 21:30:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.292 21:30:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.292 21:30:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.292 21:30:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.292 21:30:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.292 21:30:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.292 21:30:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.292 21:30:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.292 21:30:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:35.292 21:30:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.292 21:30:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.292 21:30:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:35.292 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:35.292 21:30:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.292 21:30:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:35.292 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:35.292 21:30:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.292 21:30:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:35.292 21:30:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.292 21:30:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.292 21:30:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:35.292 21:30:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.292 21:30:00 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:15:35.292 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:35.292 21:30:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.292 21:30:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.292 21:30:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.292 21:30:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:35.292 21:30:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.292 21:30:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:35.292 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:35.292 21:30:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.292 21:30:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:35.292 21:30:00 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.292 21:30:00 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:15:35.292 21:30:00 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:15:35.292 21:30:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:35.292 21:30:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.292 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:15:35.551 ************************************ 00:15:35.551 START TEST nvmf_perf_adq 00:15:35.551 ************************************ 00:15:35.551 21:30:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:15:35.551 * Looking for test storage... 00:15:35.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.551 21:30:01 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.551 21:30:01 -- nvmf/common.sh@7 -- # uname -s 00:15:35.551 21:30:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.551 21:30:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.551 21:30:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.551 21:30:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.551 21:30:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.551 21:30:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.551 21:30:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.551 21:30:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.551 21:30:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.551 21:30:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.551 21:30:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:35.551 21:30:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:35.551 21:30:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.551 21:30:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.551 21:30:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.551 21:30:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.551 21:30:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.551 21:30:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.551 21:30:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.551 21:30:01 -- 
scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.551 21:30:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.551 21:30:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.551 21:30:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.551 21:30:01 -- paths/export.sh@5 -- # export PATH 00:15:35.551 21:30:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.551 21:30:01 -- nvmf/common.sh@47 -- # : 0 00:15:35.551 21:30:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.551 21:30:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.551 21:30:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.551 21:30:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.551 21:30:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.551 21:30:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.551 21:30:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.551 21:30:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.551 21:30:01 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:15:35.551 21:30:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:35.551 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:37.454 21:30:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:37.454 21:30:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.454 21:30:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.454 21:30:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.454 
21:30:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.454 21:30:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.454 21:30:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.454 21:30:02 -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.454 21:30:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.454 21:30:02 -- nvmf/common.sh@296 -- # e810=() 00:15:37.454 21:30:02 -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.454 21:30:02 -- nvmf/common.sh@297 -- # x722=() 00:15:37.454 21:30:02 -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.454 21:30:02 -- nvmf/common.sh@298 -- # mlx=() 00:15:37.454 21:30:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.454 21:30:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.454 21:30:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.454 21:30:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.455 21:30:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.455 21:30:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.455 21:30:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.455 21:30:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.455 21:30:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.455 21:30:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.455 21:30:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.455 21:30:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.455 21:30:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:37.455 21:30:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:37.455 21:30:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:37.455 21:30:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.455 21:30:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:37.455 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:37.455 21:30:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.455 21:30:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:37.455 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:37.455 21:30:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:37.455 21:30:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:37.455 21:30:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:15:37.455 21:30:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.455 21:30:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:37.455 21:30:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.455 21:30:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:37.455 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:37.455 21:30:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.455 21:30:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.455 21:30:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.455 21:30:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:37.455 21:30:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.455 21:30:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:37.455 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:37.455 21:30:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.455 21:30:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:37.455 21:30:02 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.455 21:30:02 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:15:37.455 21:30:02 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:37.455 21:30:02 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:15:37.455 21:30:02 -- target/perf_adq.sh@52 -- # rmmod ice 00:15:38.392 21:30:03 -- target/perf_adq.sh@53 -- # modprobe ice 00:15:40.298 21:30:05 -- target/perf_adq.sh@54 -- # sleep 5 00:15:45.578 21:30:10 -- target/perf_adq.sh@67 -- # nvmftestinit 00:15:45.578 21:30:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:45.578 21:30:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.578 21:30:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:45.578 21:30:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:45.578 21:30:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:45.578 21:30:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.578 21:30:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.578 21:30:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.578 21:30:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:45.578 21:30:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.578 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:15:45.578 21:30:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:45.578 21:30:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:45.578 21:30:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:45.578 21:30:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:45.578 21:30:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:45.578 21:30:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:45.578 21:30:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:45.578 21:30:10 -- nvmf/common.sh@295 -- # net_devs=() 00:15:45.578 21:30:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:45.578 21:30:10 -- nvmf/common.sh@296 -- # e810=() 00:15:45.578 21:30:10 -- nvmf/common.sh@296 -- # local -ga e810 00:15:45.578 21:30:10 -- nvmf/common.sh@297 -- # x722=() 00:15:45.578 21:30:10 -- nvmf/common.sh@297 -- # local -ga x722 00:15:45.578 21:30:10 -- nvmf/common.sh@298 -- # mlx=() 00:15:45.578 21:30:10 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:15:45.578 21:30:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:45.578 21:30:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:45.578 21:30:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:45.578 21:30:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:45.578 21:30:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:45.578 21:30:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:45.578 21:30:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:45.578 21:30:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:45.578 21:30:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:45.578 21:30:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:45.578 21:30:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:45.578 21:30:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:45.578 21:30:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:45.578 21:30:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:45.578 21:30:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.578 21:30:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:45.578 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:45.578 21:30:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.578 21:30:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:45.578 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:45.578 21:30:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:45.578 21:30:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.578 21:30:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.578 21:30:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:45.578 21:30:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.578 21:30:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:45.578 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:45.578 21:30:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.578 21:30:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.578 21:30:10 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.578 21:30:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:45.578 21:30:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.578 21:30:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:45.578 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:45.578 21:30:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.578 21:30:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:45.578 21:30:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:45.578 21:30:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:45.578 21:30:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.578 21:30:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.578 21:30:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:45.578 21:30:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:45.578 21:30:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:45.578 21:30:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:45.578 21:30:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:45.578 21:30:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:45.578 21:30:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.578 21:30:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:45.578 21:30:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:45.578 21:30:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:45.578 21:30:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:45.578 21:30:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:45.578 21:30:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:45.578 21:30:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:45.578 21:30:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:45.578 21:30:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:45.578 21:30:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:45.578 21:30:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:45.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:15:45.578 00:15:45.578 --- 10.0.0.2 ping statistics --- 00:15:45.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.578 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:15:45.578 21:30:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:45.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:45.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:15:45.578 00:15:45.578 --- 10.0.0.1 ping statistics --- 00:15:45.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.578 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:15:45.578 21:30:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.578 21:30:10 -- nvmf/common.sh@411 -- # return 0 00:15:45.578 21:30:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:45.578 21:30:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.578 21:30:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:45.578 21:30:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.578 21:30:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:45.578 21:30:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:45.578 21:30:10 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:45.578 21:30:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:45.578 21:30:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:45.578 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:15:45.578 21:30:10 -- nvmf/common.sh@470 -- # nvmfpid=2618313 00:15:45.578 21:30:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:45.578 21:30:10 -- nvmf/common.sh@471 -- # waitforlisten 2618313 00:15:45.578 21:30:10 -- common/autotest_common.sh@817 -- # '[' -z 2618313 ']' 00:15:45.578 21:30:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.578 21:30:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:45.578 21:30:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.578 21:30:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:45.578 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:15:45.578 [2024-04-24 21:30:10.865025] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:45.578 [2024-04-24 21:30:10.865103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.578 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.579 [2024-04-24 21:30:10.929472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:45.579 [2024-04-24 21:30:11.040150] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.579 [2024-04-24 21:30:11.040211] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.579 [2024-04-24 21:30:11.040224] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.579 [2024-04-24 21:30:11.040235] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.579 [2024-04-24 21:30:11.040245] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
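The gather_supported_nvmf_pci_devs pass that precedes each of these stages matches NICs by PCI vendor:device ID and then maps each matching function to its kernel net device through sysfs. A condensed sketch of that loop follows; it is not the harness's actual code (which pre-caches the PCI bus), but the effect is the same. Only the two e810 IDs from the trace are kept, and on this host only 0x159b is present.

  intel=0x8086
  e810=(0x1592 0x159b)
  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      [[ $vendor == "$intel" ]] || continue
      [[ " ${e810[*]} " == *" $device "* ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      # Each bound port exposes its interface name under .../net/
      for dev in "$pci"/net/*; do
          [[ -e $dev ]] && net_devs+=("${dev##*/}")
      done
  done
  echo "Net devices: ${net_devs[*]}"

The repeated "Found 0000:0a:00.0 (0x8086 - 0x159b)" and "Found net devices under ...: cvl_0_0 / cvl_0_1" pairs throughout this section are this discovery loop re-running at the start of nvmftestinit for each test.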
00:15:45.579 [2024-04-24 21:30:11.040308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.579 [2024-04-24 21:30:11.040368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.579 [2024-04-24 21:30:11.040434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:45.579 [2024-04-24 21:30:11.040437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.579 21:30:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:45.579 21:30:11 -- common/autotest_common.sh@850 -- # return 0 00:15:45.579 21:30:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:45.579 21:30:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:45.579 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:15:45.579 21:30:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.579 21:30:11 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:15:45.579 21:30:11 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:15:45.579 21:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.579 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:15:45.579 21:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.579 21:30:11 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:15:45.579 21:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.579 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:15:45.579 21:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.579 21:30:11 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:15:45.579 21:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.579 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:15:45.579 [2024-04-24 21:30:11.211574] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.579 21:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.579 21:30:11 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:45.579 21:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.579 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:15:45.579 Malloc1 00:15:45.579 21:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.579 21:30:11 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:45.579 21:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.579 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:15:45.579 21:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.579 21:30:11 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.579 21:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.579 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:15:45.839 21:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.839 21:30:11 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.839 21:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.839 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:15:45.839 [2024-04-24 21:30:11.262658] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.839 21:30:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.839 21:30:11 -- target/perf_adq.sh@73 -- # perfpid=2618338 00:15:45.839 21:30:11 -- target/perf_adq.sh@74 -- # sleep 2 00:15:45.839 21:30:11 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:45.839 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.743 21:30:13 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:15:47.743 21:30:13 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:15:47.743 21:30:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.743 21:30:13 -- target/perf_adq.sh@76 -- # wc -l 00:15:47.743 21:30:13 -- common/autotest_common.sh@10 -- # set +x 00:15:47.743 21:30:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.743 21:30:13 -- target/perf_adq.sh@76 -- # count=4 00:15:47.743 21:30:13 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:15:47.743 21:30:13 -- target/perf_adq.sh@81 -- # wait 2618338 00:15:55.854 Initializing NVMe Controllers 00:15:55.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:55.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:15:55.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:15:55.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:15:55.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:15:55.854 Initialization complete. Launching workers. 00:15:55.854 ======================================================== 00:15:55.854 Latency(us) 00:15:55.854 Device Information : IOPS MiB/s Average min max 00:15:55.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10264.00 40.09 6234.93 2441.39 9370.05 00:15:55.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10592.90 41.38 6042.44 2239.57 8612.35 00:15:55.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10405.20 40.65 6150.52 2102.11 9028.43 00:15:55.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9228.80 36.05 6937.17 5770.22 8967.70 00:15:55.854 ======================================================== 00:15:55.854 Total : 40490.89 158.17 6322.94 2102.11 9370.05 00:15:55.854 00:15:55.854 21:30:21 -- target/perf_adq.sh@82 -- # nvmftestfini 00:15:55.854 21:30:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:55.854 21:30:21 -- nvmf/common.sh@117 -- # sync 00:15:55.854 21:30:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.854 21:30:21 -- nvmf/common.sh@120 -- # set +e 00:15:55.854 21:30:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.854 21:30:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.854 rmmod nvme_tcp 00:15:55.854 rmmod nvme_fabrics 00:15:55.854 rmmod nvme_keyring 00:15:55.854 21:30:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.854 21:30:21 -- nvmf/common.sh@124 -- # set -e 00:15:55.854 21:30:21 -- nvmf/common.sh@125 -- # return 0 00:15:55.854 21:30:21 -- nvmf/common.sh@478 -- # '[' -n 2618313 ']' 00:15:55.854 21:30:21 -- nvmf/common.sh@479 -- # killprocess 2618313 00:15:55.854 21:30:21 -- common/autotest_common.sh@936 -- # '[' -z 2618313 ']' 00:15:55.854 21:30:21 -- common/autotest_common.sh@940 -- # 
kill -0 2618313 00:15:55.854 21:30:21 -- common/autotest_common.sh@941 -- # uname 00:15:55.854 21:30:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.854 21:30:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2618313 00:15:55.854 21:30:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:55.854 21:30:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:55.854 21:30:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2618313' 00:15:55.854 killing process with pid 2618313 00:15:55.854 21:30:21 -- common/autotest_common.sh@955 -- # kill 2618313 00:15:55.855 21:30:21 -- common/autotest_common.sh@960 -- # wait 2618313 00:15:56.118 21:30:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:56.118 21:30:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:56.118 21:30:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:56.118 21:30:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.118 21:30:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.118 21:30:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.118 21:30:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.118 21:30:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.661 21:30:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:58.661 21:30:23 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:15:58.661 21:30:23 -- target/perf_adq.sh@52 -- # rmmod ice 00:15:58.919 21:30:24 -- target/perf_adq.sh@53 -- # modprobe ice 00:16:00.826 21:30:26 -- target/perf_adq.sh@54 -- # sleep 5 00:16:06.106 21:30:31 -- target/perf_adq.sh@87 -- # nvmftestinit 00:16:06.106 21:30:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:06.106 21:30:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.106 21:30:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:06.106 21:30:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:06.106 21:30:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:06.106 21:30:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.106 21:30:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.106 21:30:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.106 21:30:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:06.106 21:30:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:06.106 21:30:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:06.106 21:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:06.106 21:30:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:06.106 21:30:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.106 21:30:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.106 21:30:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.106 21:30:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.106 21:30:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.106 21:30:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.106 21:30:31 -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.106 21:30:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.106 21:30:31 -- nvmf/common.sh@296 -- # e810=() 00:16:06.106 21:30:31 -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.106 21:30:31 -- nvmf/common.sh@297 -- # x722=() 00:16:06.106 21:30:31 -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.106 21:30:31 -- nvmf/common.sh@298 -- # mlx=() 00:16:06.106 21:30:31 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:16:06.106 21:30:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.106 21:30:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.106 21:30:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.106 21:30:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.106 21:30:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.106 21:30:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.106 21:30:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.106 21:30:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.106 21:30:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.106 21:30:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.106 21:30:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.106 21:30:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.106 21:30:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.106 21:30:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.107 21:30:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.107 21:30:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.107 21:30:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:06.107 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:06.107 21:30:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.107 21:30:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:06.107 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:06.107 21:30:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.107 21:30:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.107 21:30:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.107 21:30:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:06.107 21:30:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.107 21:30:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:06.107 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:06.107 21:30:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.107 21:30:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.107 21:30:31 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.107 21:30:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:06.107 21:30:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.107 21:30:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:06.107 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:06.107 21:30:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.107 21:30:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:06.107 21:30:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:06.107 21:30:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:06.107 21:30:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.107 21:30:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.107 21:30:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.107 21:30:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.107 21:30:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.107 21:30:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.107 21:30:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.107 21:30:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.107 21:30:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.107 21:30:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.107 21:30:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.107 21:30:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.107 21:30:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.107 21:30:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.107 21:30:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.107 21:30:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.107 21:30:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.107 21:30:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.107 21:30:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.107 21:30:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:16:06.107 00:16:06.107 --- 10.0.0.2 ping statistics --- 00:16:06.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.107 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:16:06.107 21:30:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:16:06.107 00:16:06.107 --- 10.0.0.1 ping statistics --- 00:16:06.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.107 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:16:06.107 21:30:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.107 21:30:31 -- nvmf/common.sh@411 -- # return 0 00:16:06.107 21:30:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:06.107 21:30:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.107 21:30:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:06.107 21:30:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.107 21:30:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:06.107 21:30:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:06.107 21:30:31 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:16:06.107 21:30:31 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:16:06.107 21:30:31 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:16:06.107 21:30:31 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:16:06.107 net.core.busy_poll = 1 00:16:06.107 21:30:31 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:16:06.107 net.core.busy_read = 1 00:16:06.107 21:30:31 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:16:06.107 21:30:31 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:16:06.107 21:30:31 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:16:06.107 21:30:31 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:16:06.107 21:30:31 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:16:06.107 21:30:31 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:06.107 21:30:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:06.107 21:30:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:06.107 21:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:06.107 21:30:31 -- nvmf/common.sh@470 -- # nvmfpid=2620960 00:16:06.107 21:30:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:06.107 21:30:31 -- nvmf/common.sh@471 -- # waitforlisten 2620960 00:16:06.107 21:30:31 -- common/autotest_common.sh@817 -- # '[' -z 2620960 ']' 00:16:06.107 21:30:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.107 21:30:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:06.107 21:30:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
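The adq_configure_driver block above is the heart of the ADQ setup: it enables hardware traffic-class offload on the target interface, turns on kernel busy polling, splits the ice NIC's queues into two traffic classes, and pins NVMe/TCP traffic for the listener into the second class with a hardware flower filter. A minimal standalone sketch, assuming an ice-driven interface named cvl_0_0 and the 10.0.0.2:4420 listener from this run (the test runs each command inside the cvl_0_0_ns_spdk namespace via ip netns exec):

    # enable HW traffic-class offload, disable packet-inspect optimization
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # busy polling keeps reactors spinning on their sockets
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 owns queues 0-1, TC1 owns queues 2-3
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (dst port 4420) into TC1 entirely in hardware (skip_sw)
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1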
00:16:06.107 21:30:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:06.107 21:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:06.367 [2024-04-24 21:30:31.802789] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:16:06.367 [2024-04-24 21:30:31.802881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.367 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.367 [2024-04-24 21:30:31.869681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:06.367 [2024-04-24 21:30:31.975394] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.367 [2024-04-24 21:30:31.975450] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.367 [2024-04-24 21:30:31.975479] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.367 [2024-04-24 21:30:31.975489] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.367 [2024-04-24 21:30:31.975499] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.367 [2024-04-24 21:30:31.975670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.367 [2024-04-24 21:30:31.975701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.367 [2024-04-24 21:30:31.975762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.367 [2024-04-24 21:30:31.975765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.367 21:30:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:06.367 21:30:32 -- common/autotest_common.sh@850 -- # return 0 00:16:06.367 21:30:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:06.367 21:30:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:06.367 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:06.367 21:30:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.367 21:30:32 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:16:06.367 21:30:32 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:16:06.367 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.367 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:06.626 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.626 21:30:32 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:16:06.626 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.626 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:06.626 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.626 21:30:32 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:16:06.626 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.626 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:06.626 [2024-04-24 21:30:32.159578] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.626 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.626 21:30:32 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
00:16:06.626 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.626 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:06.626 Malloc1 00:16:06.626 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.626 21:30:32 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:06.626 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.626 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:06.626 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.626 21:30:32 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:06.626 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.626 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:06.626 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.626 21:30:32 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.626 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.626 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:06.626 [2024-04-24 21:30:32.212991] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.626 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.626 21:30:32 -- target/perf_adq.sh@94 -- # perfpid=2621109 00:16:06.626 21:30:32 -- target/perf_adq.sh@95 -- # sleep 2 00:16:06.627 21:30:32 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:06.627 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.172 21:30:34 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:16:09.172 21:30:34 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:16:09.172 21:30:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.172 21:30:34 -- target/perf_adq.sh@97 -- # wc -l 00:16:09.172 21:30:34 -- common/autotest_common.sh@10 -- # set +x 00:16:09.172 21:30:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.172 21:30:34 -- target/perf_adq.sh@97 -- # count=2 00:16:09.172 21:30:34 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:16:09.172 21:30:34 -- target/perf_adq.sh@103 -- # wait 2621109 00:16:17.287 Initializing NVMe Controllers 00:16:17.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:17.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:17.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:17.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:17.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:17.287 Initialization complete. Launching workers. 
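Before blocking on the perf run, the test cross-checks that ADQ steering took effect: nvmf_get_stats is queried and the jq filter counts poll groups whose current_io_qpairs is zero, expecting some of the four reactors to sit idle because the flower filter funnels connections into one traffic class (the [[ 2 -lt 2 ]] check above passing means at least two idle groups were found). A hedged equivalent using scripts/rpc.py, which stands in here for the test's rpc_cmd wrapper:

    # count idle poll groups; fewer than 2 would mean qpairs were spread
    # across all reactors, i.e. ADQ steering did not take effect
    count=$(scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
            | wc -l)
    if [ "$count" -lt 2 ]; then
        echo "ADQ steering failed: only $count idle poll groups" >&2
        exit 1
    fi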
00:16:17.287 ======================================================== 00:16:17.287 Latency(us) 00:16:17.287 Device Information : IOPS MiB/s Average min max 00:16:17.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5018.70 19.60 12783.57 2622.46 57241.10 00:16:17.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6821.10 26.64 9393.18 1835.76 53796.50 00:16:17.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7043.80 27.51 9102.80 1776.34 53482.83 00:16:17.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5747.20 22.45 11161.39 1802.35 58301.22 00:16:17.287 ======================================================== 00:16:17.287 Total : 24630.80 96.21 10413.54 1776.34 58301.22 00:16:17.287 00:16:17.287 21:30:42 -- target/perf_adq.sh@104 -- # nvmftestfini 00:16:17.287 21:30:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:17.287 21:30:42 -- nvmf/common.sh@117 -- # sync 00:16:17.287 21:30:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:17.287 21:30:42 -- nvmf/common.sh@120 -- # set +e 00:16:17.287 21:30:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:17.287 21:30:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:17.287 rmmod nvme_tcp 00:16:17.287 rmmod nvme_fabrics 00:16:17.287 rmmod nvme_keyring 00:16:17.287 21:30:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:17.287 21:30:42 -- nvmf/common.sh@124 -- # set -e 00:16:17.287 21:30:42 -- nvmf/common.sh@125 -- # return 0 00:16:17.287 21:30:42 -- nvmf/common.sh@478 -- # '[' -n 2620960 ']' 00:16:17.287 21:30:42 -- nvmf/common.sh@479 -- # killprocess 2620960 00:16:17.287 21:30:42 -- common/autotest_common.sh@936 -- # '[' -z 2620960 ']' 00:16:17.287 21:30:42 -- common/autotest_common.sh@940 -- # kill -0 2620960 00:16:17.287 21:30:42 -- common/autotest_common.sh@941 -- # uname 00:16:17.287 21:30:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:17.287 21:30:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2620960 00:16:17.287 21:30:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:17.287 21:30:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:17.287 21:30:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2620960' 00:16:17.287 killing process with pid 2620960 00:16:17.287 21:30:42 -- common/autotest_common.sh@955 -- # kill 2620960 00:16:17.287 21:30:42 -- common/autotest_common.sh@960 -- # wait 2620960 00:16:17.287 21:30:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:17.287 21:30:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:17.287 21:30:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:17.287 21:30:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:17.287 21:30:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:17.287 21:30:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.287 21:30:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.287 21:30:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.826 21:30:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:19.826 21:30:44 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:16:19.826 00:16:19.826 real 0m43.901s 00:16:19.826 user 2m34.387s 00:16:19.826 sys 0m11.641s 00:16:19.826 21:30:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.826 21:30:44 -- common/autotest_common.sh@10 -- # set +x 00:16:19.826 
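nvmftestfini above unwinds the fixture in reverse order: the kernel initiator modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe -r's verbose output), the target process is killed and reaped, and the initiator address is flushed. A condensed sketch, with the pid and interface names from this run; the namespace removal is an assumption about what _remove_spdk_ns does, since its body is not traced here:

    sync
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 2620960                   # nvmfpid from this run
    while kill -0 2620960 2>/dev/null; do sleep 0.1; done   # wait for exit
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed _remove_spdk_ns body
    ip -4 addr flush cvl_0_1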
************************************ 00:16:19.826 END TEST nvmf_perf_adq 00:16:19.826 ************************************ 00:16:19.826 21:30:44 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:19.826 21:30:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:19.826 21:30:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.826 21:30:44 -- common/autotest_common.sh@10 -- # set +x 00:16:19.826 ************************************ 00:16:19.826 START TEST nvmf_shutdown 00:16:19.826 ************************************ 00:16:19.826 21:30:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:19.826 * Looking for test storage... 00:16:19.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.826 21:30:45 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.826 21:30:45 -- nvmf/common.sh@7 -- # uname -s 00:16:19.826 21:30:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.826 21:30:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.826 21:30:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.826 21:30:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.826 21:30:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.826 21:30:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.826 21:30:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.826 21:30:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.826 21:30:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.826 21:30:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.826 21:30:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.826 21:30:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.826 21:30:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.826 21:30:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.827 21:30:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.827 21:30:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.827 21:30:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.827 21:30:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.827 21:30:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.827 21:30:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.827 21:30:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.827 21:30:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.827 21:30:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.827 21:30:45 -- paths/export.sh@5 -- # export PATH 00:16:19.827 21:30:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.827 21:30:45 -- nvmf/common.sh@47 -- # : 0 00:16:19.827 21:30:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.827 21:30:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.827 21:30:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.827 21:30:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.827 21:30:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.827 21:30:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.827 21:30:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.827 21:30:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.827 21:30:45 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:19.827 21:30:45 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:19.827 21:30:45 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:19.827 21:30:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:19.827 21:30:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.827 21:30:45 -- common/autotest_common.sh@10 -- # set +x 00:16:19.827 ************************************ 00:16:19.827 START TEST nvmf_shutdown_tc1 00:16:19.827 ************************************ 00:16:19.827 21:30:45 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:16:19.827 21:30:45 -- target/shutdown.sh@74 -- # starttarget 00:16:19.827 21:30:45 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:19.827 21:30:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:19.827 21:30:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.827 21:30:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:19.827 21:30:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:19.827 21:30:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:19.827 
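One detail worth noting from the re-sourced nvmf/common.sh above: the host identity is generated fresh with nvme-cli, and NVME_HOSTID is simply the UUID tail of the generated NQN. A small illustrative sketch; the connect line is an example of how those values get used, not something this test runs, and the hostid derivation is inferred from the traced values:

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip everything up to the last ':'
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn "$NVME_HOSTNQN" --hostid "$NVME_HOSTID"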
21:30:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.827 21:30:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.827 21:30:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.827 21:30:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:19.827 21:30:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:19.827 21:30:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:19.827 21:30:45 -- common/autotest_common.sh@10 -- # set +x 00:16:21.732 21:30:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:21.732 21:30:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:21.732 21:30:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:21.732 21:30:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:21.732 21:30:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:21.732 21:30:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:21.732 21:30:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:21.732 21:30:47 -- nvmf/common.sh@295 -- # net_devs=() 00:16:21.732 21:30:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:21.732 21:30:47 -- nvmf/common.sh@296 -- # e810=() 00:16:21.732 21:30:47 -- nvmf/common.sh@296 -- # local -ga e810 00:16:21.732 21:30:47 -- nvmf/common.sh@297 -- # x722=() 00:16:21.732 21:30:47 -- nvmf/common.sh@297 -- # local -ga x722 00:16:21.732 21:30:47 -- nvmf/common.sh@298 -- # mlx=() 00:16:21.732 21:30:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:21.732 21:30:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.732 21:30:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.732 21:30:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.732 21:30:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.732 21:30:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.732 21:30:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.732 21:30:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.732 21:30:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.732 21:30:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.732 21:30:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.732 21:30:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.732 21:30:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:21.732 21:30:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:21.732 21:30:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:21.732 21:30:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.732 21:30:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:21.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:21.732 21:30:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:16:21.732 21:30:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:21.732 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:21.732 21:30:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:21.732 21:30:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.732 21:30:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.732 21:30:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:21.732 21:30:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.732 21:30:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:21.732 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:21.732 21:30:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.732 21:30:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.732 21:30:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.732 21:30:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:21.732 21:30:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.732 21:30:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:21.732 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:21.732 21:30:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.732 21:30:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:21.732 21:30:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:21.732 21:30:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:21.732 21:30:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.732 21:30:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.732 21:30:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:21.732 21:30:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:21.732 21:30:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:21.732 21:30:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:21.732 21:30:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:21.732 21:30:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:21.732 21:30:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.732 21:30:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:21.732 21:30:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:21.732 21:30:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:21.732 21:30:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:21.732 21:30:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:21.732 21:30:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:21.732 21:30:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:21.732 21:30:47 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:21.732 21:30:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:21.732 21:30:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:21.732 21:30:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:21.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:16:21.732 00:16:21.732 --- 10.0.0.2 ping statistics --- 00:16:21.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.732 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:21.732 21:30:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:21.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:16:21.732 00:16:21.732 --- 10.0.0.1 ping statistics --- 00:16:21.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.732 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:21.732 21:30:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.732 21:30:47 -- nvmf/common.sh@411 -- # return 0 00:16:21.732 21:30:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:21.732 21:30:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.732 21:30:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:21.732 21:30:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.732 21:30:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:21.732 21:30:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:21.732 21:30:47 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:21.732 21:30:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:21.732 21:30:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:21.732 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:16:21.732 21:30:47 -- nvmf/common.sh@470 -- # nvmfpid=2624288 00:16:21.732 21:30:47 -- nvmf/common.sh@471 -- # waitforlisten 2624288 00:16:21.732 21:30:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:21.732 21:30:47 -- common/autotest_common.sh@817 -- # '[' -z 2624288 ']' 00:16:21.732 21:30:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.732 21:30:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:21.732 21:30:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.732 21:30:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:21.732 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:16:21.732 [2024-04-24 21:30:47.326238] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
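nvmf_tcp_init, replayed here for the shutdown suite, is what lets one dual-port NIC test itself: port 0 (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, port 1 (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, 4420/tcp is opened, and a ping in each direction proves the loop over the wire. The same topology as a standalone sketch, with interface names and addresses taken from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator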
00:16:21.732 [2024-04-24 21:30:47.326320] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.732 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.732 [2024-04-24 21:30:47.396391] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.991 [2024-04-24 21:30:47.511681] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.991 [2024-04-24 21:30:47.511737] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.991 [2024-04-24 21:30:47.511760] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.991 [2024-04-24 21:30:47.511772] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.991 [2024-04-24 21:30:47.511783] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.991 [2024-04-24 21:30:47.511853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.991 [2024-04-24 21:30:47.511928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.991 [2024-04-24 21:30:47.512042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:21.991 [2024-04-24 21:30:47.512044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.926 21:30:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:22.926 21:30:48 -- common/autotest_common.sh@850 -- # return 0 00:16:22.926 21:30:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:22.926 21:30:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:22.926 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:16:22.926 21:30:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.926 21:30:48 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:22.926 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.926 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:16:22.926 [2024-04-24 21:30:48.282295] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.926 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.926 21:30:48 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:22.926 21:30:48 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:22.926 21:30:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:22.926 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:16:22.926 21:30:48 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:22.926 21:30:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.926 21:30:48 -- target/shutdown.sh@28 -- # cat 00:16:22.926 21:30:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.926 21:30:48 -- target/shutdown.sh@28 -- # cat 00:16:22.926 21:30:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.927 21:30:48 -- target/shutdown.sh@28 -- # cat 00:16:22.927 21:30:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.927 21:30:48 -- target/shutdown.sh@28 -- # cat 00:16:22.927 21:30:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.927 21:30:48 -- target/shutdown.sh@28 
-- # cat 00:16:22.927 21:30:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.927 21:30:48 -- target/shutdown.sh@28 -- # cat 00:16:22.927 21:30:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.927 21:30:48 -- target/shutdown.sh@28 -- # cat 00:16:22.927 21:30:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.927 21:30:48 -- target/shutdown.sh@28 -- # cat 00:16:22.927 21:30:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.927 21:30:48 -- target/shutdown.sh@28 -- # cat 00:16:22.927 21:30:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.927 21:30:48 -- target/shutdown.sh@28 -- # cat 00:16:22.927 21:30:48 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:22.927 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.927 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:16:22.927 Malloc1 00:16:22.927 [2024-04-24 21:30:48.371533] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.927 Malloc2 00:16:22.927 Malloc3 00:16:22.927 Malloc4 00:16:22.927 Malloc5 00:16:22.927 Malloc6 00:16:23.185 Malloc7 00:16:23.185 Malloc8 00:16:23.185 Malloc9 00:16:23.185 Malloc10 00:16:23.185 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.185 21:30:48 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:23.185 21:30:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:23.185 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:16:23.185 21:30:48 -- target/shutdown.sh@78 -- # perfpid=2624522 00:16:23.185 21:30:48 -- target/shutdown.sh@79 -- # waitforlisten 2624522 /var/tmp/bdevperf.sock 00:16:23.185 21:30:48 -- common/autotest_common.sh@817 -- # '[' -z 2624522 ']' 00:16:23.185 21:30:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.185 21:30:48 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:23.185 21:30:48 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:23.185 21:30:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:23.185 21:30:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
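The create_subsystems block above is a template expansion: for each of the ten subsystems it appends the same RPC snippet to rpcs.txt (the repeated cat lines), then replays the whole file through one rpc_cmd batch, which is why Malloc1 through Malloc10 and the 4420 listener all appear at once. A hedged one-at-a-time equivalent with scripts/rpc.py (the rpc.py path is assumed; the serial format mirrors the SPDK00000000000001 seen earlier in this log):

    for i in $(seq 1 10); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK$(printf '%014d' "$i")"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done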
00:16:23.185 21:30:48 -- nvmf/common.sh@521 -- # config=() 00:16:23.185 21:30:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:23.185 21:30:48 -- nvmf/common.sh@521 -- # local subsystem config 00:16:23.185 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:16:23.185 21:30:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.185 21:30:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.185 { 00:16:23.185 "params": { 00:16:23.185 "name": "Nvme$subsystem", 00:16:23.185 "trtype": "$TEST_TRANSPORT", 00:16:23.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.185 "adrfam": "ipv4", 00:16:23.185 "trsvcid": "$NVMF_PORT", 00:16:23.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.185 "hdgst": ${hdgst:-false}, 00:16:23.185 "ddgst": ${ddgst:-false} 00:16:23.185 }, 00:16:23.185 "method": "bdev_nvme_attach_controller" 00:16:23.185 } 00:16:23.185 EOF 00:16:23.185 )") 00:16:23.185 21:30:48 -- nvmf/common.sh@543 -- # cat 00:16:23.185 21:30:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.185 21:30:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.185 { 00:16:23.185 "params": { 00:16:23.185 "name": "Nvme$subsystem", 00:16:23.185 "trtype": "$TEST_TRANSPORT", 00:16:23.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.185 "adrfam": "ipv4", 00:16:23.185 "trsvcid": "$NVMF_PORT", 00:16:23.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.185 "hdgst": ${hdgst:-false}, 00:16:23.185 "ddgst": ${ddgst:-false} 00:16:23.185 }, 00:16:23.185 "method": "bdev_nvme_attach_controller" 00:16:23.185 } 00:16:23.185 EOF 00:16:23.185 )") 00:16:23.185 21:30:48 -- nvmf/common.sh@543 -- # cat 00:16:23.185 21:30:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.185 21:30:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.185 { 00:16:23.185 "params": { 00:16:23.185 "name": "Nvme$subsystem", 00:16:23.185 "trtype": "$TEST_TRANSPORT", 00:16:23.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.185 "adrfam": "ipv4", 00:16:23.185 "trsvcid": "$NVMF_PORT", 00:16:23.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.185 "hdgst": ${hdgst:-false}, 00:16:23.185 "ddgst": ${ddgst:-false} 00:16:23.185 }, 00:16:23.185 "method": "bdev_nvme_attach_controller" 00:16:23.185 } 00:16:23.185 EOF 00:16:23.185 )") 00:16:23.185 21:30:48 -- nvmf/common.sh@543 -- # cat 00:16:23.185 21:30:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.185 21:30:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.185 { 00:16:23.185 "params": { 00:16:23.185 "name": "Nvme$subsystem", 00:16:23.185 "trtype": "$TEST_TRANSPORT", 00:16:23.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.185 "adrfam": "ipv4", 00:16:23.185 "trsvcid": "$NVMF_PORT", 00:16:23.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.185 "hdgst": ${hdgst:-false}, 00:16:23.185 "ddgst": ${ddgst:-false} 00:16:23.185 }, 00:16:23.185 "method": "bdev_nvme_attach_controller" 00:16:23.185 } 00:16:23.185 EOF 00:16:23.185 )") 00:16:23.185 21:30:48 -- nvmf/common.sh@543 -- # cat 00:16:23.185 21:30:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.185 21:30:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.185 { 00:16:23.185 "params": { 00:16:23.186 "name": "Nvme$subsystem", 00:16:23.186 "trtype": 
"$TEST_TRANSPORT", 00:16:23.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.186 "adrfam": "ipv4", 00:16:23.186 "trsvcid": "$NVMF_PORT", 00:16:23.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.186 "hdgst": ${hdgst:-false}, 00:16:23.186 "ddgst": ${ddgst:-false} 00:16:23.186 }, 00:16:23.186 "method": "bdev_nvme_attach_controller" 00:16:23.186 } 00:16:23.186 EOF 00:16:23.186 )") 00:16:23.186 21:30:48 -- nvmf/common.sh@543 -- # cat 00:16:23.444 21:30:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.444 21:30:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.444 { 00:16:23.444 "params": { 00:16:23.444 "name": "Nvme$subsystem", 00:16:23.444 "trtype": "$TEST_TRANSPORT", 00:16:23.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.444 "adrfam": "ipv4", 00:16:23.444 "trsvcid": "$NVMF_PORT", 00:16:23.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.444 "hdgst": ${hdgst:-false}, 00:16:23.444 "ddgst": ${ddgst:-false} 00:16:23.444 }, 00:16:23.444 "method": "bdev_nvme_attach_controller" 00:16:23.444 } 00:16:23.444 EOF 00:16:23.444 )") 00:16:23.444 21:30:48 -- nvmf/common.sh@543 -- # cat 00:16:23.444 21:30:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.444 21:30:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.444 { 00:16:23.444 "params": { 00:16:23.444 "name": "Nvme$subsystem", 00:16:23.444 "trtype": "$TEST_TRANSPORT", 00:16:23.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.444 "adrfam": "ipv4", 00:16:23.444 "trsvcid": "$NVMF_PORT", 00:16:23.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.444 "hdgst": ${hdgst:-false}, 00:16:23.444 "ddgst": ${ddgst:-false} 00:16:23.444 }, 00:16:23.444 "method": "bdev_nvme_attach_controller" 00:16:23.444 } 00:16:23.444 EOF 00:16:23.444 )") 00:16:23.444 21:30:48 -- nvmf/common.sh@543 -- # cat 00:16:23.444 21:30:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.444 21:30:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.444 { 00:16:23.444 "params": { 00:16:23.444 "name": "Nvme$subsystem", 00:16:23.444 "trtype": "$TEST_TRANSPORT", 00:16:23.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.444 "adrfam": "ipv4", 00:16:23.444 "trsvcid": "$NVMF_PORT", 00:16:23.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.444 "hdgst": ${hdgst:-false}, 00:16:23.444 "ddgst": ${ddgst:-false} 00:16:23.444 }, 00:16:23.444 "method": "bdev_nvme_attach_controller" 00:16:23.444 } 00:16:23.444 EOF 00:16:23.444 )") 00:16:23.444 21:30:48 -- nvmf/common.sh@543 -- # cat 00:16:23.444 21:30:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.444 21:30:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.444 { 00:16:23.444 "params": { 00:16:23.444 "name": "Nvme$subsystem", 00:16:23.444 "trtype": "$TEST_TRANSPORT", 00:16:23.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.444 "adrfam": "ipv4", 00:16:23.444 "trsvcid": "$NVMF_PORT", 00:16:23.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.444 "hdgst": ${hdgst:-false}, 00:16:23.444 "ddgst": ${ddgst:-false} 00:16:23.444 }, 00:16:23.444 "method": "bdev_nvme_attach_controller" 00:16:23.444 } 00:16:23.444 EOF 00:16:23.444 )") 00:16:23.444 21:30:48 -- nvmf/common.sh@543 -- # cat 00:16:23.444 
21:30:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.444 21:30:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.444 { 00:16:23.444 "params": { 00:16:23.444 "name": "Nvme$subsystem", 00:16:23.444 "trtype": "$TEST_TRANSPORT", 00:16:23.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.444 "adrfam": "ipv4", 00:16:23.444 "trsvcid": "$NVMF_PORT", 00:16:23.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.444 "hdgst": ${hdgst:-false}, 00:16:23.444 "ddgst": ${ddgst:-false} 00:16:23.444 }, 00:16:23.444 "method": "bdev_nvme_attach_controller" 00:16:23.444 } 00:16:23.444 EOF 00:16:23.444 )") 00:16:23.444 21:30:48 -- nvmf/common.sh@543 -- # cat 00:16:23.444 21:30:48 -- nvmf/common.sh@545 -- # jq . 00:16:23.444 21:30:48 -- nvmf/common.sh@546 -- # IFS=, 00:16:23.444 21:30:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:23.444 "params": { 00:16:23.444 "name": "Nvme1", 00:16:23.444 "trtype": "tcp", 00:16:23.444 "traddr": "10.0.0.2", 00:16:23.444 "adrfam": "ipv4", 00:16:23.444 "trsvcid": "4420", 00:16:23.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:23.444 "hdgst": false, 00:16:23.444 "ddgst": false 00:16:23.444 }, 00:16:23.444 "method": "bdev_nvme_attach_controller" 00:16:23.444 },{ 00:16:23.444 "params": { 00:16:23.444 "name": "Nvme2", 00:16:23.444 "trtype": "tcp", 00:16:23.444 "traddr": "10.0.0.2", 00:16:23.444 "adrfam": "ipv4", 00:16:23.444 "trsvcid": "4420", 00:16:23.444 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:23.444 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:23.444 "hdgst": false, 00:16:23.444 "ddgst": false 00:16:23.444 }, 00:16:23.444 "method": "bdev_nvme_attach_controller" 00:16:23.444 },{ 00:16:23.444 "params": { 00:16:23.444 "name": "Nvme3", 00:16:23.444 "trtype": "tcp", 00:16:23.444 "traddr": "10.0.0.2", 00:16:23.444 "adrfam": "ipv4", 00:16:23.444 "trsvcid": "4420", 00:16:23.444 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:23.444 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:23.444 "hdgst": false, 00:16:23.444 "ddgst": false 00:16:23.444 }, 00:16:23.444 "method": "bdev_nvme_attach_controller" 00:16:23.444 },{ 00:16:23.445 "params": { 00:16:23.445 "name": "Nvme4", 00:16:23.445 "trtype": "tcp", 00:16:23.445 "traddr": "10.0.0.2", 00:16:23.445 "adrfam": "ipv4", 00:16:23.445 "trsvcid": "4420", 00:16:23.445 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:23.445 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:23.445 "hdgst": false, 00:16:23.445 "ddgst": false 00:16:23.445 }, 00:16:23.445 "method": "bdev_nvme_attach_controller" 00:16:23.445 },{ 00:16:23.445 "params": { 00:16:23.445 "name": "Nvme5", 00:16:23.445 "trtype": "tcp", 00:16:23.445 "traddr": "10.0.0.2", 00:16:23.445 "adrfam": "ipv4", 00:16:23.445 "trsvcid": "4420", 00:16:23.445 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:23.445 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:23.445 "hdgst": false, 00:16:23.445 "ddgst": false 00:16:23.445 }, 00:16:23.445 "method": "bdev_nvme_attach_controller" 00:16:23.445 },{ 00:16:23.445 "params": { 00:16:23.445 "name": "Nvme6", 00:16:23.445 "trtype": "tcp", 00:16:23.445 "traddr": "10.0.0.2", 00:16:23.445 "adrfam": "ipv4", 00:16:23.445 "trsvcid": "4420", 00:16:23.445 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:23.445 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:23.445 "hdgst": false, 00:16:23.445 "ddgst": false 00:16:23.445 }, 00:16:23.445 "method": "bdev_nvme_attach_controller" 00:16:23.445 },{ 00:16:23.445 "params": { 00:16:23.445 
"name": "Nvme7", 00:16:23.445 "trtype": "tcp", 00:16:23.445 "traddr": "10.0.0.2", 00:16:23.445 "adrfam": "ipv4", 00:16:23.445 "trsvcid": "4420", 00:16:23.445 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:23.445 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:23.445 "hdgst": false, 00:16:23.445 "ddgst": false 00:16:23.445 }, 00:16:23.445 "method": "bdev_nvme_attach_controller" 00:16:23.445 },{ 00:16:23.445 "params": { 00:16:23.445 "name": "Nvme8", 00:16:23.445 "trtype": "tcp", 00:16:23.445 "traddr": "10.0.0.2", 00:16:23.445 "adrfam": "ipv4", 00:16:23.445 "trsvcid": "4420", 00:16:23.445 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:23.445 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:23.445 "hdgst": false, 00:16:23.445 "ddgst": false 00:16:23.445 }, 00:16:23.445 "method": "bdev_nvme_attach_controller" 00:16:23.445 },{ 00:16:23.445 "params": { 00:16:23.445 "name": "Nvme9", 00:16:23.445 "trtype": "tcp", 00:16:23.445 "traddr": "10.0.0.2", 00:16:23.445 "adrfam": "ipv4", 00:16:23.445 "trsvcid": "4420", 00:16:23.445 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:23.445 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:23.445 "hdgst": false, 00:16:23.445 "ddgst": false 00:16:23.445 }, 00:16:23.445 "method": "bdev_nvme_attach_controller" 00:16:23.445 },{ 00:16:23.445 "params": { 00:16:23.445 "name": "Nvme10", 00:16:23.445 "trtype": "tcp", 00:16:23.445 "traddr": "10.0.0.2", 00:16:23.445 "adrfam": "ipv4", 00:16:23.445 "trsvcid": "4420", 00:16:23.445 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:23.445 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:23.445 "hdgst": false, 00:16:23.445 "ddgst": false 00:16:23.445 }, 00:16:23.445 "method": "bdev_nvme_attach_controller" 00:16:23.445 }' 00:16:23.445 [2024-04-24 21:30:48.887815] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:16:23.445 [2024-04-24 21:30:48.887896] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:23.445 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.445 [2024-04-24 21:30:48.952151] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.445 [2024-04-24 21:30:49.059810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.343 21:30:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:25.343 21:30:50 -- common/autotest_common.sh@850 -- # return 0 00:16:25.343 21:30:50 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:25.343 21:30:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.343 21:30:50 -- common/autotest_common.sh@10 -- # set +x 00:16:25.343 21:30:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.343 21:30:50 -- target/shutdown.sh@83 -- # kill -9 2624522 00:16:25.343 21:30:50 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:16:25.343 21:30:50 -- target/shutdown.sh@87 -- # sleep 1 00:16:26.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2624522 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:16:26.275 21:30:51 -- target/shutdown.sh@88 -- # kill -0 2624288 00:16:26.275 21:30:51 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:26.275 21:30:51 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:26.275 21:30:51 -- nvmf/common.sh@521 -- # config=() 00:16:26.275 21:30:51 -- nvmf/common.sh@521 -- # local subsystem config 00:16:26.275 21:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.275 21:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:26.275 { 00:16:26.275 "params": { 00:16:26.275 "name": "Nvme$subsystem", 00:16:26.275 "trtype": "$TEST_TRANSPORT", 00:16:26.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.275 "adrfam": "ipv4", 00:16:26.275 "trsvcid": "$NVMF_PORT", 00:16:26.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.275 "hdgst": ${hdgst:-false}, 00:16:26.275 "ddgst": ${ddgst:-false} 00:16:26.275 }, 00:16:26.275 "method": "bdev_nvme_attach_controller" 00:16:26.275 } 00:16:26.275 EOF 00:16:26.275 )") 00:16:26.275 21:30:51 -- nvmf/common.sh@543 -- # cat 00:16:26.275 21:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.275 21:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:26.275 { 00:16:26.275 "params": { 00:16:26.275 "name": "Nvme$subsystem", 00:16:26.275 "trtype": "$TEST_TRANSPORT", 00:16:26.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.275 "adrfam": "ipv4", 00:16:26.275 "trsvcid": "$NVMF_PORT", 00:16:26.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.275 "hdgst": ${hdgst:-false}, 00:16:26.275 "ddgst": ${ddgst:-false} 00:16:26.275 }, 00:16:26.275 "method": "bdev_nvme_attach_controller" 00:16:26.275 } 00:16:26.275 EOF 00:16:26.275 )") 00:16:26.275 21:30:51 -- nvmf/common.sh@543 -- # cat 00:16:26.275 21:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.275 21:30:51 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:16:26.275 { 00:16:26.275 "params": { 00:16:26.275 "name": "Nvme$subsystem", 00:16:26.275 "trtype": "$TEST_TRANSPORT", 00:16:26.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.275 "adrfam": "ipv4", 00:16:26.275 "trsvcid": "$NVMF_PORT", 00:16:26.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.275 "hdgst": ${hdgst:-false}, 00:16:26.275 "ddgst": ${ddgst:-false} 00:16:26.275 }, 00:16:26.275 "method": "bdev_nvme_attach_controller" 00:16:26.275 } 00:16:26.275 EOF 00:16:26.275 )") 00:16:26.275 21:30:51 -- nvmf/common.sh@543 -- # cat 00:16:26.275 21:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.275 21:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:26.275 { 00:16:26.275 "params": { 00:16:26.275 "name": "Nvme$subsystem", 00:16:26.275 "trtype": "$TEST_TRANSPORT", 00:16:26.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.275 "adrfam": "ipv4", 00:16:26.275 "trsvcid": "$NVMF_PORT", 00:16:26.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.275 "hdgst": ${hdgst:-false}, 00:16:26.275 "ddgst": ${ddgst:-false} 00:16:26.275 }, 00:16:26.275 "method": "bdev_nvme_attach_controller" 00:16:26.275 } 00:16:26.275 EOF 00:16:26.275 )") 00:16:26.275 21:30:51 -- nvmf/common.sh@543 -- # cat 00:16:26.275 21:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.275 21:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:26.275 { 00:16:26.275 "params": { 00:16:26.275 "name": "Nvme$subsystem", 00:16:26.275 "trtype": "$TEST_TRANSPORT", 00:16:26.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.275 "adrfam": "ipv4", 00:16:26.275 "trsvcid": "$NVMF_PORT", 00:16:26.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.276 "hdgst": ${hdgst:-false}, 00:16:26.276 "ddgst": ${ddgst:-false} 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 } 00:16:26.276 EOF 00:16:26.276 )") 00:16:26.276 21:30:51 -- nvmf/common.sh@543 -- # cat 00:16:26.276 21:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.276 21:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:26.276 { 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme$subsystem", 00:16:26.276 "trtype": "$TEST_TRANSPORT", 00:16:26.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "$NVMF_PORT", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.276 "hdgst": ${hdgst:-false}, 00:16:26.276 "ddgst": ${ddgst:-false} 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 } 00:16:26.276 EOF 00:16:26.276 )") 00:16:26.276 21:30:51 -- nvmf/common.sh@543 -- # cat 00:16:26.276 21:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.276 21:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:26.276 { 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme$subsystem", 00:16:26.276 "trtype": "$TEST_TRANSPORT", 00:16:26.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "$NVMF_PORT", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.276 "hdgst": ${hdgst:-false}, 00:16:26.276 "ddgst": ${ddgst:-false} 00:16:26.276 }, 00:16:26.276 "method": 
"bdev_nvme_attach_controller" 00:16:26.276 } 00:16:26.276 EOF 00:16:26.276 )") 00:16:26.276 21:30:51 -- nvmf/common.sh@543 -- # cat 00:16:26.276 21:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.276 21:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:26.276 { 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme$subsystem", 00:16:26.276 "trtype": "$TEST_TRANSPORT", 00:16:26.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "$NVMF_PORT", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.276 "hdgst": ${hdgst:-false}, 00:16:26.276 "ddgst": ${ddgst:-false} 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 } 00:16:26.276 EOF 00:16:26.276 )") 00:16:26.276 21:30:51 -- nvmf/common.sh@543 -- # cat 00:16:26.276 21:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.276 21:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:26.276 { 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme$subsystem", 00:16:26.276 "trtype": "$TEST_TRANSPORT", 00:16:26.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "$NVMF_PORT", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.276 "hdgst": ${hdgst:-false}, 00:16:26.276 "ddgst": ${ddgst:-false} 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 } 00:16:26.276 EOF 00:16:26.276 )") 00:16:26.276 21:30:51 -- nvmf/common.sh@543 -- # cat 00:16:26.276 21:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.276 21:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:26.276 { 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme$subsystem", 00:16:26.276 "trtype": "$TEST_TRANSPORT", 00:16:26.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "$NVMF_PORT", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.276 "hdgst": ${hdgst:-false}, 00:16:26.276 "ddgst": ${ddgst:-false} 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 } 00:16:26.276 EOF 00:16:26.276 )") 00:16:26.276 21:30:51 -- nvmf/common.sh@543 -- # cat 00:16:26.276 21:30:51 -- nvmf/common.sh@545 -- # jq . 
00:16:26.276 21:30:51 -- nvmf/common.sh@546 -- # IFS=, 00:16:26.276 21:30:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme1", 00:16:26.276 "trtype": "tcp", 00:16:26.276 "traddr": "10.0.0.2", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "4420", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:26.276 "hdgst": false, 00:16:26.276 "ddgst": false 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 },{ 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme2", 00:16:26.276 "trtype": "tcp", 00:16:26.276 "traddr": "10.0.0.2", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "4420", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:26.276 "hdgst": false, 00:16:26.276 "ddgst": false 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 },{ 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme3", 00:16:26.276 "trtype": "tcp", 00:16:26.276 "traddr": "10.0.0.2", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "4420", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:26.276 "hdgst": false, 00:16:26.276 "ddgst": false 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 },{ 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme4", 00:16:26.276 "trtype": "tcp", 00:16:26.276 "traddr": "10.0.0.2", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "4420", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:26.276 "hdgst": false, 00:16:26.276 "ddgst": false 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 },{ 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme5", 00:16:26.276 "trtype": "tcp", 00:16:26.276 "traddr": "10.0.0.2", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "4420", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:26.276 "hdgst": false, 00:16:26.276 "ddgst": false 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 },{ 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme6", 00:16:26.276 "trtype": "tcp", 00:16:26.276 "traddr": "10.0.0.2", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "4420", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:26.276 "hdgst": false, 00:16:26.276 "ddgst": false 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 },{ 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme7", 00:16:26.276 "trtype": "tcp", 00:16:26.276 "traddr": "10.0.0.2", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "4420", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:26.276 "hdgst": false, 00:16:26.276 "ddgst": false 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 },{ 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme8", 00:16:26.276 "trtype": "tcp", 00:16:26.276 "traddr": "10.0.0.2", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "4420", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:26.276 "hdgst": false, 00:16:26.276 "ddgst": false 00:16:26.276 }, 00:16:26.276 "method": 
"bdev_nvme_attach_controller" 00:16:26.276 },{ 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme9", 00:16:26.276 "trtype": "tcp", 00:16:26.276 "traddr": "10.0.0.2", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "4420", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:26.276 "hdgst": false, 00:16:26.276 "ddgst": false 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 },{ 00:16:26.276 "params": { 00:16:26.276 "name": "Nvme10", 00:16:26.276 "trtype": "tcp", 00:16:26.276 "traddr": "10.0.0.2", 00:16:26.276 "adrfam": "ipv4", 00:16:26.276 "trsvcid": "4420", 00:16:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:26.276 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:26.276 "hdgst": false, 00:16:26.276 "ddgst": false 00:16:26.276 }, 00:16:26.276 "method": "bdev_nvme_attach_controller" 00:16:26.276 }' 00:16:26.276 [2024-04-24 21:30:51.905583] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:16:26.276 [2024-04-24 21:30:51.905694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2624895 ] 00:16:26.276 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.534 [2024-04-24 21:30:51.972214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.535 [2024-04-24 21:30:52.083597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.435 Running I/O for 1 seconds... 00:16:29.400 00:16:29.400 Latency(us) 00:16:29.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.400 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:29.400 Verification LBA range: start 0x0 length 0x400 00:16:29.400 Nvme1n1 : 1.15 221.97 13.87 0.00 0.00 282551.18 9369.22 262532.36 00:16:29.400 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:29.400 Verification LBA range: start 0x0 length 0x400 00:16:29.400 Nvme2n1 : 1.03 186.08 11.63 0.00 0.00 334014.39 23981.32 292047.83 00:16:29.400 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:29.400 Verification LBA range: start 0x0 length 0x400 00:16:29.400 Nvme3n1 : 1.10 236.40 14.78 0.00 0.00 257445.20 8301.23 242337.56 00:16:29.400 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:29.400 Verification LBA range: start 0x0 length 0x400 00:16:29.400 Nvme4n1 : 1.16 165.10 10.32 0.00 0.00 365533.36 23398.78 338651.21 00:16:29.400 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:29.400 Verification LBA range: start 0x0 length 0x400 00:16:29.400 Nvme5n1 : 1.17 274.44 17.15 0.00 0.00 216116.49 21748.24 268746.15 00:16:29.400 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:29.400 Verification LBA range: start 0x0 length 0x400 00:16:29.400 Nvme6n1 : 1.11 230.59 14.41 0.00 0.00 251888.83 22524.97 217482.43 00:16:29.400 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:29.400 Verification LBA range: start 0x0 length 0x400 00:16:29.400 Nvme7n1 : 1.14 228.18 14.26 0.00 0.00 245851.12 11942.12 248551.35 00:16:29.400 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:29.400 Verification LBA range: start 0x0 length 0x400 00:16:29.400 Nvme8n1 : 1.17 163.64 10.23 0.00 0.00 345265.49 42525.58 
329330.54 00:16:29.400 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:29.400 Verification LBA range: start 0x0 length 0x400 00:16:29.400 Nvme9n1 : 1.18 275.77 17.24 0.00 0.00 200958.12 2063.17 256318.58 00:16:29.400 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:29.400 Verification LBA range: start 0x0 length 0x400 00:16:29.400 Nvme10n1 : 1.19 274.00 17.13 0.00 0.00 199422.59 2063.17 264085.81 00:16:29.400 =================================================================================================================== 00:16:29.400 Total : 2256.18 141.01 0.00 0.00 258944.47 2063.17 338651.21 00:16:29.668 21:30:55 -- target/shutdown.sh@94 -- # stoptarget 00:16:29.668 21:30:55 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:29.668 21:30:55 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:29.668 21:30:55 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:29.668 21:30:55 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:29.668 21:30:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:29.668 21:30:55 -- nvmf/common.sh@117 -- # sync 00:16:29.668 21:30:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.668 21:30:55 -- nvmf/common.sh@120 -- # set +e 00:16:29.668 21:30:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.668 21:30:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.668 rmmod nvme_tcp 00:16:29.668 rmmod nvme_fabrics 00:16:29.668 rmmod nvme_keyring 00:16:29.668 21:30:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.668 21:30:55 -- nvmf/common.sh@124 -- # set -e 00:16:29.668 21:30:55 -- nvmf/common.sh@125 -- # return 0 00:16:29.668 21:30:55 -- nvmf/common.sh@478 -- # '[' -n 2624288 ']' 00:16:29.668 21:30:55 -- nvmf/common.sh@479 -- # killprocess 2624288 00:16:29.668 21:30:55 -- common/autotest_common.sh@936 -- # '[' -z 2624288 ']' 00:16:29.668 21:30:55 -- common/autotest_common.sh@940 -- # kill -0 2624288 00:16:29.668 21:30:55 -- common/autotest_common.sh@941 -- # uname 00:16:29.668 21:30:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:29.668 21:30:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2624288 00:16:29.668 21:30:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:29.668 21:30:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:29.668 21:30:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2624288' 00:16:29.668 killing process with pid 2624288 00:16:29.668 21:30:55 -- common/autotest_common.sh@955 -- # kill 2624288 00:16:29.668 21:30:55 -- common/autotest_common.sh@960 -- # wait 2624288 00:16:30.235 21:30:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:30.235 21:30:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:30.235 21:30:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:30.235 21:30:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.235 21:30:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:30.235 21:30:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.235 21:30:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.235 21:30:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.138 21:30:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:32.138 00:16:32.138 real 0m12.556s 
00:16:32.138 user 0m37.343s 00:16:32.138 sys 0m3.284s 00:16:32.138 21:30:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:32.138 21:30:57 -- common/autotest_common.sh@10 -- # set +x 00:16:32.138 ************************************ 00:16:32.138 END TEST nvmf_shutdown_tc1 00:16:32.138 ************************************ 00:16:32.397 21:30:57 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:16:32.397 21:30:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:32.397 21:30:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:32.397 21:30:57 -- common/autotest_common.sh@10 -- # set +x 00:16:32.397 ************************************ 00:16:32.397 START TEST nvmf_shutdown_tc2 00:16:32.397 ************************************ 00:16:32.397 21:30:57 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:16:32.397 21:30:57 -- target/shutdown.sh@99 -- # starttarget 00:16:32.397 21:30:57 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:32.397 21:30:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:32.397 21:30:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.397 21:30:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:32.397 21:30:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:32.397 21:30:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:32.397 21:30:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.397 21:30:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.397 21:30:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.397 21:30:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:32.397 21:30:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:32.397 21:30:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:32.397 21:30:57 -- common/autotest_common.sh@10 -- # set +x 00:16:32.397 21:30:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:32.397 21:30:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:32.397 21:30:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:32.397 21:30:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:32.397 21:30:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:32.397 21:30:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:32.397 21:30:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:32.397 21:30:57 -- nvmf/common.sh@295 -- # net_devs=() 00:16:32.397 21:30:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:32.397 21:30:57 -- nvmf/common.sh@296 -- # e810=() 00:16:32.397 21:30:57 -- nvmf/common.sh@296 -- # local -ga e810 00:16:32.397 21:30:57 -- nvmf/common.sh@297 -- # x722=() 00:16:32.397 21:30:57 -- nvmf/common.sh@297 -- # local -ga x722 00:16:32.397 21:30:57 -- nvmf/common.sh@298 -- # mlx=() 00:16:32.397 21:30:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:32.397 21:30:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:32.397 21:30:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:32.397 21:30:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:32.397 21:30:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:32.398 21:30:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:32.398 21:30:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:32.398 21:30:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:32.398 21:30:57 -- nvmf/common.sh@314 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:32.398 21:30:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:32.398 21:30:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:32.398 21:30:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:32.398 21:30:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:32.398 21:30:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:32.398 21:30:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:32.398 21:30:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.398 21:30:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:32.398 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:32.398 21:30:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.398 21:30:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:32.398 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:32.398 21:30:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:32.398 21:30:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.398 21:30:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.398 21:30:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:32.398 21:30:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.398 21:30:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:32.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:32.398 21:30:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.398 21:30:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.398 21:30:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.398 21:30:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:32.398 21:30:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.398 21:30:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:32.398 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:32.398 21:30:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.398 21:30:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:32.398 21:30:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:32.398 21:30:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:32.398 21:30:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 
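
Device discovery above is driven purely by PCI IDs: 0x8086:0x159b is an E810-class NIC bound to the ice driver, so both ports land in the e810 bucket and their net devices (cvl_0_0, cvl_0_1) are read back from sysfs. A rough equivalent of that bucketing, assuming lspci is available (the real gather_supported_nvmf_pci_devs in nvmf/common.sh works from the prebuilt pci_bus_cache seen in the trace instead):

# Sketch: classify Ethernet controllers (PCI class 0200) by vendor:device ID.
declare -a e810=() x722=() mlx=() net_devs=()
while read -r pci id; do
    case "$id" in
        8086:1592|8086:159b) e810+=("$pci") ;;  # Intel E810 (ice)
        8086:37d2)           x722+=("$pci") ;;  # Intel X722 (i40e)
        15b3:*)              mlx+=("$pci")  ;;  # Mellanox ConnectX family
    esac
done < <(lspci -Dn | awk '$2 == "0200:" {print $1, $3}')

# Map each selected device to its kernel net device via sysfs, as the trace
# does with pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
for pci in "${e810[@]}"; do
    net_devs+=("/sys/bus/pci/devices/$pci/net/"*)
done
echo "Found net devices: ${net_devs[*]##*/}"
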
00:16:32.398 21:30:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:32.398 21:30:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:32.398 21:30:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:32.398 21:30:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:32.398 21:30:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:32.398 21:30:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:32.398 21:30:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:32.398 21:30:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:32.398 21:30:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:32.398 21:30:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:32.398 21:30:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:32.398 21:30:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:32.398 21:30:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:32.398 21:30:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:32.398 21:30:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:32.398 21:30:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:32.398 21:30:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:32.398 21:30:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:32.398 21:30:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:32.398 21:30:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:32.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:32.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:16:32.398 00:16:32.398 --- 10.0.0.2 ping statistics --- 00:16:32.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.398 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:16:32.398 21:30:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:32.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:32.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:16:32.398 00:16:32.398 --- 10.0.0.1 ping statistics --- 00:16:32.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.398 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:16:32.398 21:30:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.398 21:30:58 -- nvmf/common.sh@411 -- # return 0 00:16:32.398 21:30:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:32.398 21:30:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.398 21:30:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:32.398 21:30:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:32.398 21:30:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.398 21:30:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:32.398 21:30:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:32.657 21:30:58 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:32.657 21:30:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:32.657 21:30:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:32.657 21:30:58 -- common/autotest_common.sh@10 -- # set +x 00:16:32.657 21:30:58 -- nvmf/common.sh@470 -- # nvmfpid=2625789 00:16:32.657 21:30:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:32.657 21:30:58 -- nvmf/common.sh@471 -- # waitforlisten 2625789 00:16:32.657 21:30:58 -- common/autotest_common.sh@817 -- # '[' -z 2625789 ']' 00:16:32.657 21:30:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.657 21:30:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:32.657 21:30:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.657 21:30:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:32.657 21:30:58 -- common/autotest_common.sh@10 -- # set +x 00:16:32.657 [2024-04-24 21:30:58.141082] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:16:32.657 [2024-04-24 21:30:58.141178] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.657 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.657 [2024-04-24 21:30:58.209929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.657 [2024-04-24 21:30:58.324417] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.657 [2024-04-24 21:30:58.324490] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.657 [2024-04-24 21:30:58.324507] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.657 [2024-04-24 21:30:58.324521] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.657 [2024-04-24 21:30:58.324532] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
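
With traffic flowing both ways across the namespace boundary (0.125 ms and 0.128 ms RTTs above), nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten (invoked above with max_retries=100) blocks until the RPC socket answers. A plausible reduction of that gate, assuming scripts/rpc.py and the standard rpc_get_methods RPC (the real helper in test/common/autotest_common.sh is more defensive):

# Sketch: poll until the target's RPC socket is live or the process dies.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i > 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target exited during startup
        if [[ -S "$rpc_addr" ]] &&
           scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                              # socket up and answering RPCs
        fi
        sleep 0.1
    done
    return 1                                      # retries exhausted
}
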
00:16:32.657 [2024-04-24 21:30:58.324621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.657 [2024-04-24 21:30:58.324755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.657 [2024-04-24 21:30:58.324820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:32.657 [2024-04-24 21:30:58.324823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.591 21:30:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:33.591 21:30:59 -- common/autotest_common.sh@850 -- # return 0 00:16:33.591 21:30:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:33.591 21:30:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:33.591 21:30:59 -- common/autotest_common.sh@10 -- # set +x 00:16:33.591 21:30:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.591 21:30:59 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:33.591 21:30:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.591 21:30:59 -- common/autotest_common.sh@10 -- # set +x 00:16:33.591 [2024-04-24 21:30:59.092380] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.591 21:30:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.591 21:30:59 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:33.591 21:30:59 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:33.591 21:30:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:33.591 21:30:59 -- common/autotest_common.sh@10 -- # set +x 00:16:33.591 21:30:59 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:33.591 21:30:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:33.591 21:30:59 -- target/shutdown.sh@28 -- # cat 00:16:33.591 21:30:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:33.591 21:30:59 -- target/shutdown.sh@28 -- # cat 00:16:33.591 21:30:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:33.591 21:30:59 -- target/shutdown.sh@28 -- # cat 00:16:33.591 21:30:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:33.591 21:30:59 -- target/shutdown.sh@28 -- # cat 00:16:33.591 21:30:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:33.591 21:30:59 -- target/shutdown.sh@28 -- # cat 00:16:33.591 21:30:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:33.591 21:30:59 -- target/shutdown.sh@28 -- # cat 00:16:33.591 21:30:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:33.591 21:30:59 -- target/shutdown.sh@28 -- # cat 00:16:33.591 21:30:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:33.591 21:30:59 -- target/shutdown.sh@28 -- # cat 00:16:33.591 21:30:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:33.591 21:30:59 -- target/shutdown.sh@28 -- # cat 00:16:33.591 21:30:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:33.591 21:30:59 -- target/shutdown.sh@28 -- # cat 00:16:33.591 21:30:59 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:33.591 21:30:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.591 21:30:59 -- common/autotest_common.sh@10 -- # set +x 00:16:33.591 Malloc1 00:16:33.591 [2024-04-24 21:30:59.167707] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.591 Malloc2 
00:16:33.591 Malloc3 00:16:33.849 Malloc4 00:16:33.849 Malloc5 00:16:33.849 Malloc6 00:16:33.849 Malloc7 00:16:33.849 Malloc8 00:16:34.108 Malloc9 00:16:34.108 Malloc10 00:16:34.108 21:30:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.108 21:30:59 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:34.108 21:30:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:34.108 21:30:59 -- common/autotest_common.sh@10 -- # set +x 00:16:34.108 21:30:59 -- target/shutdown.sh@103 -- # perfpid=2625982 00:16:34.108 21:30:59 -- target/shutdown.sh@104 -- # waitforlisten 2625982 /var/tmp/bdevperf.sock 00:16:34.108 21:30:59 -- common/autotest_common.sh@817 -- # '[' -z 2625982 ']' 00:16:34.108 21:30:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.109 21:30:59 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:34.109 21:30:59 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:34.109 21:30:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:34.109 21:30:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.109 21:30:59 -- nvmf/common.sh@521 -- # config=() 00:16:34.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:34.109 21:30:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:34.109 21:30:59 -- nvmf/common.sh@521 -- # local subsystem config 00:16:34.109 21:30:59 -- common/autotest_common.sh@10 -- # set +x 00:16:34.109 21:30:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.109 { 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme$subsystem", 00:16:34.109 "trtype": "$TEST_TRANSPORT", 00:16:34.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "$NVMF_PORT", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.109 "hdgst": ${hdgst:-false}, 00:16:34.109 "ddgst": ${ddgst:-false} 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 } 00:16:34.109 EOF 00:16:34.109 )") 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # cat 00:16:34.109 21:30:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.109 { 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme$subsystem", 00:16:34.109 "trtype": "$TEST_TRANSPORT", 00:16:34.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "$NVMF_PORT", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.109 "hdgst": ${hdgst:-false}, 00:16:34.109 "ddgst": ${ddgst:-false} 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 } 00:16:34.109 EOF 00:16:34.109 )") 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # cat 00:16:34.109 21:30:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.109 { 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme$subsystem", 00:16:34.109 "trtype": "$TEST_TRANSPORT", 00:16:34.109 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "$NVMF_PORT", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.109 "hdgst": ${hdgst:-false}, 00:16:34.109 "ddgst": ${ddgst:-false} 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 } 00:16:34.109 EOF 00:16:34.109 )") 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # cat 00:16:34.109 21:30:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.109 { 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme$subsystem", 00:16:34.109 "trtype": "$TEST_TRANSPORT", 00:16:34.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "$NVMF_PORT", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.109 "hdgst": ${hdgst:-false}, 00:16:34.109 "ddgst": ${ddgst:-false} 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 } 00:16:34.109 EOF 00:16:34.109 )") 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # cat 00:16:34.109 21:30:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.109 { 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme$subsystem", 00:16:34.109 "trtype": "$TEST_TRANSPORT", 00:16:34.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "$NVMF_PORT", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.109 "hdgst": ${hdgst:-false}, 00:16:34.109 "ddgst": ${ddgst:-false} 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 } 00:16:34.109 EOF 00:16:34.109 )") 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # cat 00:16:34.109 21:30:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.109 { 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme$subsystem", 00:16:34.109 "trtype": "$TEST_TRANSPORT", 00:16:34.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "$NVMF_PORT", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.109 "hdgst": ${hdgst:-false}, 00:16:34.109 "ddgst": ${ddgst:-false} 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 } 00:16:34.109 EOF 00:16:34.109 )") 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # cat 00:16:34.109 21:30:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.109 { 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme$subsystem", 00:16:34.109 "trtype": "$TEST_TRANSPORT", 00:16:34.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "$NVMF_PORT", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.109 "hdgst": ${hdgst:-false}, 00:16:34.109 "ddgst": ${ddgst:-false} 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 } 00:16:34.109 EOF 00:16:34.109 )") 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # cat 00:16:34.109 21:30:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.109 { 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme$subsystem", 00:16:34.109 "trtype": "$TEST_TRANSPORT", 00:16:34.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "$NVMF_PORT", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.109 "hdgst": ${hdgst:-false}, 00:16:34.109 "ddgst": ${ddgst:-false} 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 } 00:16:34.109 EOF 00:16:34.109 )") 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # cat 00:16:34.109 21:30:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.109 { 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme$subsystem", 00:16:34.109 "trtype": "$TEST_TRANSPORT", 00:16:34.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "$NVMF_PORT", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.109 "hdgst": ${hdgst:-false}, 00:16:34.109 "ddgst": ${ddgst:-false} 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 } 00:16:34.109 EOF 00:16:34.109 )") 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # cat 00:16:34.109 21:30:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.109 { 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme$subsystem", 00:16:34.109 "trtype": "$TEST_TRANSPORT", 00:16:34.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "$NVMF_PORT", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.109 "hdgst": ${hdgst:-false}, 00:16:34.109 "ddgst": ${ddgst:-false} 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 } 00:16:34.109 EOF 00:16:34.109 )") 00:16:34.109 21:30:59 -- nvmf/common.sh@543 -- # cat 00:16:34.109 21:30:59 -- nvmf/common.sh@545 -- # jq . 
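
This second jq pass closes the same per-subsystem loop for tc2, and the comma-joined document printed in the next entries is what bdevperf consumes. As the --json /dev/fd/63 argument in the trace shows, the config never touches disk: bash process substitution hands it to the consumer on an anonymous file descriptor. Roughly (flags copied from the trace, paths abbreviated):

# Sketch: feed the generated config to bdevperf without a temporary file.
"$rootdir"/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 10
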
00:16:34.109 21:30:59 -- nvmf/common.sh@546 -- # IFS=, 00:16:34.109 21:30:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme1", 00:16:34.109 "trtype": "tcp", 00:16:34.109 "traddr": "10.0.0.2", 00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "4420", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.109 "hdgst": false, 00:16:34.109 "ddgst": false 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 },{ 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme2", 00:16:34.109 "trtype": "tcp", 00:16:34.109 "traddr": "10.0.0.2", 00:16:34.109 "adrfam": "ipv4", 00:16:34.109 "trsvcid": "4420", 00:16:34.109 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:34.109 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:34.109 "hdgst": false, 00:16:34.109 "ddgst": false 00:16:34.109 }, 00:16:34.109 "method": "bdev_nvme_attach_controller" 00:16:34.109 },{ 00:16:34.109 "params": { 00:16:34.109 "name": "Nvme3", 00:16:34.109 "trtype": "tcp", 00:16:34.109 "traddr": "10.0.0.2", 00:16:34.109 "adrfam": "ipv4", 00:16:34.110 "trsvcid": "4420", 00:16:34.110 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:34.110 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:34.110 "hdgst": false, 00:16:34.110 "ddgst": false 00:16:34.110 }, 00:16:34.110 "method": "bdev_nvme_attach_controller" 00:16:34.110 },{ 00:16:34.110 "params": { 00:16:34.110 "name": "Nvme4", 00:16:34.110 "trtype": "tcp", 00:16:34.110 "traddr": "10.0.0.2", 00:16:34.110 "adrfam": "ipv4", 00:16:34.110 "trsvcid": "4420", 00:16:34.110 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:34.110 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:34.110 "hdgst": false, 00:16:34.110 "ddgst": false 00:16:34.110 }, 00:16:34.110 "method": "bdev_nvme_attach_controller" 00:16:34.110 },{ 00:16:34.110 "params": { 00:16:34.110 "name": "Nvme5", 00:16:34.110 "trtype": "tcp", 00:16:34.110 "traddr": "10.0.0.2", 00:16:34.110 "adrfam": "ipv4", 00:16:34.110 "trsvcid": "4420", 00:16:34.110 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:34.110 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:34.110 "hdgst": false, 00:16:34.110 "ddgst": false 00:16:34.110 }, 00:16:34.110 "method": "bdev_nvme_attach_controller" 00:16:34.110 },{ 00:16:34.110 "params": { 00:16:34.110 "name": "Nvme6", 00:16:34.110 "trtype": "tcp", 00:16:34.110 "traddr": "10.0.0.2", 00:16:34.110 "adrfam": "ipv4", 00:16:34.110 "trsvcid": "4420", 00:16:34.110 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:34.110 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:34.110 "hdgst": false, 00:16:34.110 "ddgst": false 00:16:34.110 }, 00:16:34.110 "method": "bdev_nvme_attach_controller" 00:16:34.110 },{ 00:16:34.110 "params": { 00:16:34.110 "name": "Nvme7", 00:16:34.110 "trtype": "tcp", 00:16:34.110 "traddr": "10.0.0.2", 00:16:34.110 "adrfam": "ipv4", 00:16:34.110 "trsvcid": "4420", 00:16:34.110 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:34.110 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:34.110 "hdgst": false, 00:16:34.110 "ddgst": false 00:16:34.110 }, 00:16:34.110 "method": "bdev_nvme_attach_controller" 00:16:34.110 },{ 00:16:34.110 "params": { 00:16:34.110 "name": "Nvme8", 00:16:34.110 "trtype": "tcp", 00:16:34.110 "traddr": "10.0.0.2", 00:16:34.110 "adrfam": "ipv4", 00:16:34.110 "trsvcid": "4420", 00:16:34.110 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:34.110 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:34.110 "hdgst": false, 00:16:34.110 "ddgst": false 00:16:34.110 }, 00:16:34.110 "method": 
"bdev_nvme_attach_controller" 00:16:34.110 },{ 00:16:34.110 "params": { 00:16:34.110 "name": "Nvme9", 00:16:34.110 "trtype": "tcp", 00:16:34.110 "traddr": "10.0.0.2", 00:16:34.110 "adrfam": "ipv4", 00:16:34.110 "trsvcid": "4420", 00:16:34.110 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:34.110 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:34.110 "hdgst": false, 00:16:34.110 "ddgst": false 00:16:34.110 }, 00:16:34.110 "method": "bdev_nvme_attach_controller" 00:16:34.110 },{ 00:16:34.110 "params": { 00:16:34.110 "name": "Nvme10", 00:16:34.110 "trtype": "tcp", 00:16:34.110 "traddr": "10.0.0.2", 00:16:34.110 "adrfam": "ipv4", 00:16:34.110 "trsvcid": "4420", 00:16:34.110 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:34.110 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:34.110 "hdgst": false, 00:16:34.110 "ddgst": false 00:16:34.110 }, 00:16:34.110 "method": "bdev_nvme_attach_controller" 00:16:34.110 }' 00:16:34.110 [2024-04-24 21:30:59.683108] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:16:34.110 [2024-04-24 21:30:59.683196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625982 ] 00:16:34.110 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.110 [2024-04-24 21:30:59.747523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.368 [2024-04-24 21:30:59.856423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.270 Running I/O for 10 seconds... 00:16:36.838 21:31:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:36.838 21:31:02 -- common/autotest_common.sh@850 -- # return 0 00:16:36.838 21:31:02 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:36.838 21:31:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.838 21:31:02 -- common/autotest_common.sh@10 -- # set +x 00:16:36.838 21:31:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.838 21:31:02 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:36.838 21:31:02 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:36.838 21:31:02 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:36.838 21:31:02 -- target/shutdown.sh@57 -- # local ret=1 00:16:36.838 21:31:02 -- target/shutdown.sh@58 -- # local i 00:16:36.838 21:31:02 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:36.838 21:31:02 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:36.838 21:31:02 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:36.838 21:31:02 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:36.838 21:31:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.838 21:31:02 -- common/autotest_common.sh@10 -- # set +x 00:16:36.838 21:31:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.838 21:31:02 -- target/shutdown.sh@60 -- # read_io_count=131 00:16:36.838 21:31:02 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:16:36.838 21:31:02 -- target/shutdown.sh@64 -- # ret=0 00:16:36.838 21:31:02 -- target/shutdown.sh@65 -- # break 00:16:36.838 21:31:02 -- target/shutdown.sh@69 -- # return 0 00:16:36.838 21:31:02 -- target/shutdown.sh@110 -- # killprocess 2625982 00:16:36.838 21:31:02 -- common/autotest_common.sh@936 -- # '[' -z 2625982 ']' 00:16:36.838 21:31:02 -- common/autotest_common.sh@940 -- # kill -0 2625982 
00:16:36.838 21:31:02 -- common/autotest_common.sh@941 -- # uname
00:16:36.838 21:31:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:36.838 21:31:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2625982
00:16:36.838 21:31:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:36.838 21:31:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:36.838 21:31:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2625982'
00:16:36.838 killing process with pid 2625982
00:16:36.838 21:31:02 -- common/autotest_common.sh@955 -- # kill 2625982
00:16:36.838 21:31:02 -- common/autotest_common.sh@960 -- # wait 2625982
00:16:37.097 Received shutdown signal, test time was about 0.827905 seconds
00:16:37.097
00:16:37.097 Latency(us)
00:16:37.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:37.097 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:37.097 Verification LBA range: start 0x0 length 0x400
00:16:37.097 Nvme1n1 : 0.79 242.58 15.16 0.00 0.00 258582.00 40195.41 215928.98
00:16:37.097 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:37.097 Verification LBA range: start 0x0 length 0x400
00:16:37.097 Nvme2n1 : 0.82 235.13 14.70 0.00 0.00 262249.69 19418.07 229910.00
00:16:37.097 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:37.097 Verification LBA range: start 0x0 length 0x400
00:16:37.097 Nvme3n1 : 0.81 238.43 14.90 0.00 0.00 252601.84 42331.40 242337.56
00:16:37.097 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:37.097 Verification LBA range: start 0x0 length 0x400
00:16:37.097 Nvme4n1 : 0.80 239.75 14.98 0.00 0.00 245179.48 26602.76 256318.58
00:16:37.097 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:37.097 Verification LBA range: start 0x0 length 0x400
00:16:37.097 Nvme5n1 : 0.81 237.29 14.83 0.00 0.00 241843.52 20097.71 259425.47
00:16:37.097 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:37.097 Verification LBA range: start 0x0 length 0x400
00:16:37.097 Nvme6n1 : 0.78 164.48 10.28 0.00 0.00 338513.92 22719.15 299815.06
00:16:37.097 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:37.097 Verification LBA range: start 0x0 length 0x400
00:16:37.097 Nvme7n1 : 0.81 236.06 14.75 0.00 0.00 231204.28 22233.69 260978.92
00:16:37.097 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:37.097 Verification LBA range: start 0x0 length 0x400
00:16:37.097 Nvme8n1 : 0.82 233.47 14.59 0.00 0.00 228535.81 21359.88 251658.24
00:16:37.097 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:37.097 Verification LBA range: start 0x0 length 0x400
00:16:37.097 Nvme9n1 : 0.83 232.13 14.51 0.00 0.00 224201.64 22524.97 276513.37
00:16:37.097 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:37.097 Verification LBA range: start 0x0 length 0x400
00:16:37.097 Nvme10n1 : 0.79 162.42 10.15 0.00 0.00 308069.26 23010.42 316902.97
00:16:37.097 ===================================================================================================================
00:16:37.097 Total : 2221.74 138.86 0.00 0.00 254512.90 19418.07 316902.97
00:16:37.356 21:31:02 -- target/shutdown.sh@113 -- # sleep 1
00:16:38.289 21:31:03 -- target/shutdown.sh@114 -- # kill -0 2625789
00:16:38.289 21:31:03 -- target/shutdown.sh@116
-- # stoptarget 00:16:38.289 21:31:03 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:38.289 21:31:03 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:38.289 21:31:03 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:38.289 21:31:03 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:38.289 21:31:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:38.289 21:31:03 -- nvmf/common.sh@117 -- # sync 00:16:38.289 21:31:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:38.289 21:31:03 -- nvmf/common.sh@120 -- # set +e 00:16:38.289 21:31:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.289 21:31:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:38.289 rmmod nvme_tcp 00:16:38.289 rmmod nvme_fabrics 00:16:38.289 rmmod nvme_keyring 00:16:38.289 21:31:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.289 21:31:03 -- nvmf/common.sh@124 -- # set -e 00:16:38.289 21:31:03 -- nvmf/common.sh@125 -- # return 0 00:16:38.289 21:31:03 -- nvmf/common.sh@478 -- # '[' -n 2625789 ']' 00:16:38.289 21:31:03 -- nvmf/common.sh@479 -- # killprocess 2625789 00:16:38.289 21:31:03 -- common/autotest_common.sh@936 -- # '[' -z 2625789 ']' 00:16:38.289 21:31:03 -- common/autotest_common.sh@940 -- # kill -0 2625789 00:16:38.289 21:31:03 -- common/autotest_common.sh@941 -- # uname 00:16:38.289 21:31:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.289 21:31:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2625789 00:16:38.289 21:31:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:38.289 21:31:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:38.289 21:31:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2625789' 00:16:38.289 killing process with pid 2625789 00:16:38.289 21:31:03 -- common/autotest_common.sh@955 -- # kill 2625789 00:16:38.289 21:31:03 -- common/autotest_common.sh@960 -- # wait 2625789 00:16:38.854 21:31:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:38.854 21:31:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:38.854 21:31:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:38.854 21:31:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.854 21:31:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:38.854 21:31:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.854 21:31:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.854 21:31:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.385 21:31:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:41.385 00:16:41.385 real 0m8.602s 00:16:41.385 user 0m26.840s 00:16:41.385 sys 0m1.659s 00:16:41.385 21:31:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:41.385 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:16:41.385 ************************************ 00:16:41.385 END TEST nvmf_shutdown_tc2 00:16:41.385 ************************************ 00:16:41.385 21:31:06 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:16:41.385 21:31:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:41.385 21:31:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.385 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:16:41.385 ************************************ 00:16:41.385 START 
TEST nvmf_shutdown_tc3 00:16:41.385 ************************************ 00:16:41.385 21:31:06 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:16:41.385 21:31:06 -- target/shutdown.sh@121 -- # starttarget 00:16:41.385 21:31:06 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:41.385 21:31:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:41.385 21:31:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.385 21:31:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:41.385 21:31:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:41.385 21:31:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:41.385 21:31:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.385 21:31:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.385 21:31:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.385 21:31:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:41.385 21:31:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:41.385 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:16:41.385 21:31:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:41.385 21:31:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.385 21:31:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.385 21:31:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.385 21:31:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.385 21:31:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.385 21:31:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.385 21:31:06 -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.385 21:31:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.385 21:31:06 -- nvmf/common.sh@296 -- # e810=() 00:16:41.385 21:31:06 -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.385 21:31:06 -- nvmf/common.sh@297 -- # x722=() 00:16:41.385 21:31:06 -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.385 21:31:06 -- nvmf/common.sh@298 -- # mlx=() 00:16:41.385 21:31:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.385 21:31:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.385 21:31:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.385 21:31:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.385 21:31:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.385 21:31:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.385 21:31:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.385 21:31:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.385 21:31:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.385 21:31:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.385 21:31:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.385 21:31:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.385 21:31:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.385 21:31:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:41.385 21:31:06 -- nvmf/common.sh@335 
-- # (( 2 == 0 )) 00:16:41.385 21:31:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.385 21:31:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:41.385 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:41.385 21:31:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.385 21:31:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:41.385 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:41.385 21:31:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.385 21:31:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.385 21:31:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.385 21:31:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:41.385 21:31:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.385 21:31:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:41.385 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:41.385 21:31:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.385 21:31:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.385 21:31:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.385 21:31:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:41.385 21:31:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.385 21:31:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:41.385 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:41.385 21:31:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.385 21:31:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:41.385 21:31:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:41.385 21:31:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:41.385 21:31:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:41.385 21:31:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.385 21:31:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.385 21:31:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.385 21:31:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:41.385 21:31:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.385 21:31:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.385 21:31:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:41.385 21:31:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.385 21:31:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:16:41.385 21:31:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:41.385 21:31:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:41.385 21:31:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.385 21:31:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.385 21:31:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.385 21:31:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.385 21:31:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:41.385 21:31:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.385 21:31:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.385 21:31:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.385 21:31:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:41.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:16:41.385 00:16:41.385 --- 10.0.0.2 ping statistics --- 00:16:41.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.385 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:16:41.385 21:31:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:16:41.385 00:16:41.385 --- 10.0.0.1 ping statistics --- 00:16:41.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.385 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:41.385 21:31:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.385 21:31:06 -- nvmf/common.sh@411 -- # return 0 00:16:41.385 21:31:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:41.385 21:31:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.386 21:31:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:41.386 21:31:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:41.386 21:31:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.386 21:31:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:41.386 21:31:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:41.386 21:31:06 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:41.386 21:31:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:41.386 21:31:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:41.386 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:16:41.386 21:31:06 -- nvmf/common.sh@470 -- # nvmfpid=2626911 00:16:41.386 21:31:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:41.386 21:31:06 -- nvmf/common.sh@471 -- # waitforlisten 2626911 00:16:41.386 21:31:06 -- common/autotest_common.sh@817 -- # '[' -z 2626911 ']' 00:16:41.386 21:31:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.386 21:31:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:41.386 21:31:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:41.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.386 21:31:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:41.386 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:16:41.386 [2024-04-24 21:31:06.845232] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:16:41.386 [2024-04-24 21:31:06.845317] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.386 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.386 [2024-04-24 21:31:06.909411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.386 [2024-04-24 21:31:07.016848] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.386 [2024-04-24 21:31:07.016905] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.386 [2024-04-24 21:31:07.016927] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.386 [2024-04-24 21:31:07.016938] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.386 [2024-04-24 21:31:07.016948] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.386 [2024-04-24 21:31:07.017040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.386 [2024-04-24 21:31:07.017118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.386 [2024-04-24 21:31:07.017160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:41.386 [2024-04-24 21:31:07.017162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.644 21:31:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:41.644 21:31:07 -- common/autotest_common.sh@850 -- # return 0 00:16:41.644 21:31:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:41.644 21:31:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:41.644 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:16:41.644 21:31:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.644 21:31:07 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.644 21:31:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.644 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:16:41.644 [2024-04-24 21:31:07.163208] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.644 21:31:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.644 21:31:07 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:41.644 21:31:07 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:41.644 21:31:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:41.644 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:16:41.644 21:31:07 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:41.644 21:31:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.644 21:31:07 -- target/shutdown.sh@28 -- # cat 00:16:41.644 21:31:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.644 21:31:07 -- target/shutdown.sh@28 -- # cat 00:16:41.644 21:31:07 -- target/shutdown.sh@27 -- 
# for i in "${num_subsystems[@]}" 00:16:41.644 21:31:07 -- target/shutdown.sh@28 -- # cat 00:16:41.644 21:31:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.644 21:31:07 -- target/shutdown.sh@28 -- # cat 00:16:41.644 21:31:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.644 21:31:07 -- target/shutdown.sh@28 -- # cat 00:16:41.644 21:31:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.644 21:31:07 -- target/shutdown.sh@28 -- # cat 00:16:41.644 21:31:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.644 21:31:07 -- target/shutdown.sh@28 -- # cat 00:16:41.644 21:31:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.644 21:31:07 -- target/shutdown.sh@28 -- # cat 00:16:41.644 21:31:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.644 21:31:07 -- target/shutdown.sh@28 -- # cat 00:16:41.644 21:31:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.644 21:31:07 -- target/shutdown.sh@28 -- # cat 00:16:41.644 21:31:07 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:41.644 21:31:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.644 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:16:41.644 Malloc1 00:16:41.644 [2024-04-24 21:31:07.238267] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.644 Malloc2 00:16:41.644 Malloc3 00:16:41.903 Malloc4 00:16:41.903 Malloc5 00:16:41.903 Malloc6 00:16:41.903 Malloc7 00:16:41.903 Malloc8 00:16:42.161 Malloc9 00:16:42.161 Malloc10 00:16:42.161 21:31:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.161 21:31:07 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:42.161 21:31:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:42.161 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:16:42.161 21:31:07 -- target/shutdown.sh@125 -- # perfpid=2627087 00:16:42.161 21:31:07 -- target/shutdown.sh@126 -- # waitforlisten 2627087 /var/tmp/bdevperf.sock 00:16:42.161 21:31:07 -- common/autotest_common.sh@817 -- # '[' -z 2627087 ']' 00:16:42.161 21:31:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:42.161 21:31:07 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:42.161 21:31:07 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:42.162 21:31:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:42.162 21:31:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:42.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:42.162 21:31:07 -- nvmf/common.sh@521 -- # config=() 00:16:42.162 21:31:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:42.162 21:31:07 -- nvmf/common.sh@521 -- # local subsystem config 00:16:42.162 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:16:42.162 21:31:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:42.162 { 00:16:42.162 "params": { 00:16:42.162 "name": "Nvme$subsystem", 00:16:42.162 "trtype": "$TEST_TRANSPORT", 00:16:42.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.162 "adrfam": "ipv4", 00:16:42.162 "trsvcid": "$NVMF_PORT", 00:16:42.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.162 "hdgst": ${hdgst:-false}, 00:16:42.162 "ddgst": ${ddgst:-false} 00:16:42.162 }, 00:16:42.162 "method": "bdev_nvme_attach_controller" 00:16:42.162 } 00:16:42.162 EOF 00:16:42.162 )") 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # cat 00:16:42.162 21:31:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:42.162 { 00:16:42.162 "params": { 00:16:42.162 "name": "Nvme$subsystem", 00:16:42.162 "trtype": "$TEST_TRANSPORT", 00:16:42.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.162 "adrfam": "ipv4", 00:16:42.162 "trsvcid": "$NVMF_PORT", 00:16:42.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.162 "hdgst": ${hdgst:-false}, 00:16:42.162 "ddgst": ${ddgst:-false} 00:16:42.162 }, 00:16:42.162 "method": "bdev_nvme_attach_controller" 00:16:42.162 } 00:16:42.162 EOF 00:16:42.162 )") 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # cat 00:16:42.162 21:31:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:42.162 { 00:16:42.162 "params": { 00:16:42.162 "name": "Nvme$subsystem", 00:16:42.162 "trtype": "$TEST_TRANSPORT", 00:16:42.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.162 "adrfam": "ipv4", 00:16:42.162 "trsvcid": "$NVMF_PORT", 00:16:42.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.162 "hdgst": ${hdgst:-false}, 00:16:42.162 "ddgst": ${ddgst:-false} 00:16:42.162 }, 00:16:42.162 "method": "bdev_nvme_attach_controller" 00:16:42.162 } 00:16:42.162 EOF 00:16:42.162 )") 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # cat 00:16:42.162 21:31:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:42.162 { 00:16:42.162 "params": { 00:16:42.162 "name": "Nvme$subsystem", 00:16:42.162 "trtype": "$TEST_TRANSPORT", 00:16:42.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.162 "adrfam": "ipv4", 00:16:42.162 "trsvcid": "$NVMF_PORT", 00:16:42.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.162 "hdgst": ${hdgst:-false}, 00:16:42.162 "ddgst": ${ddgst:-false} 00:16:42.162 }, 00:16:42.162 "method": "bdev_nvme_attach_controller" 00:16:42.162 } 00:16:42.162 EOF 00:16:42.162 )") 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # cat 00:16:42.162 21:31:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:42.162 { 00:16:42.162 "params": { 00:16:42.162 "name": "Nvme$subsystem", 00:16:42.162 "trtype": 
"$TEST_TRANSPORT", 00:16:42.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.162 "adrfam": "ipv4", 00:16:42.162 "trsvcid": "$NVMF_PORT", 00:16:42.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.162 "hdgst": ${hdgst:-false}, 00:16:42.162 "ddgst": ${ddgst:-false} 00:16:42.162 }, 00:16:42.162 "method": "bdev_nvme_attach_controller" 00:16:42.162 } 00:16:42.162 EOF 00:16:42.162 )") 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # cat 00:16:42.162 21:31:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:42.162 { 00:16:42.162 "params": { 00:16:42.162 "name": "Nvme$subsystem", 00:16:42.162 "trtype": "$TEST_TRANSPORT", 00:16:42.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.162 "adrfam": "ipv4", 00:16:42.162 "trsvcid": "$NVMF_PORT", 00:16:42.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.162 "hdgst": ${hdgst:-false}, 00:16:42.162 "ddgst": ${ddgst:-false} 00:16:42.162 }, 00:16:42.162 "method": "bdev_nvme_attach_controller" 00:16:42.162 } 00:16:42.162 EOF 00:16:42.162 )") 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # cat 00:16:42.162 21:31:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:42.162 { 00:16:42.162 "params": { 00:16:42.162 "name": "Nvme$subsystem", 00:16:42.162 "trtype": "$TEST_TRANSPORT", 00:16:42.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.162 "adrfam": "ipv4", 00:16:42.162 "trsvcid": "$NVMF_PORT", 00:16:42.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.162 "hdgst": ${hdgst:-false}, 00:16:42.162 "ddgst": ${ddgst:-false} 00:16:42.162 }, 00:16:42.162 "method": "bdev_nvme_attach_controller" 00:16:42.162 } 00:16:42.162 EOF 00:16:42.162 )") 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # cat 00:16:42.162 21:31:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:42.162 { 00:16:42.162 "params": { 00:16:42.162 "name": "Nvme$subsystem", 00:16:42.162 "trtype": "$TEST_TRANSPORT", 00:16:42.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.162 "adrfam": "ipv4", 00:16:42.162 "trsvcid": "$NVMF_PORT", 00:16:42.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.162 "hdgst": ${hdgst:-false}, 00:16:42.162 "ddgst": ${ddgst:-false} 00:16:42.162 }, 00:16:42.162 "method": "bdev_nvme_attach_controller" 00:16:42.162 } 00:16:42.162 EOF 00:16:42.162 )") 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # cat 00:16:42.162 21:31:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:42.162 { 00:16:42.162 "params": { 00:16:42.162 "name": "Nvme$subsystem", 00:16:42.162 "trtype": "$TEST_TRANSPORT", 00:16:42.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.162 "adrfam": "ipv4", 00:16:42.162 "trsvcid": "$NVMF_PORT", 00:16:42.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.162 "hdgst": ${hdgst:-false}, 00:16:42.162 "ddgst": ${ddgst:-false} 00:16:42.162 }, 00:16:42.162 "method": "bdev_nvme_attach_controller" 00:16:42.162 } 00:16:42.162 EOF 00:16:42.162 )") 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # cat 00:16:42.162 
21:31:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:42.162 { 00:16:42.162 "params": { 00:16:42.162 "name": "Nvme$subsystem", 00:16:42.162 "trtype": "$TEST_TRANSPORT", 00:16:42.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.162 "adrfam": "ipv4", 00:16:42.162 "trsvcid": "$NVMF_PORT", 00:16:42.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.162 "hdgst": ${hdgst:-false}, 00:16:42.162 "ddgst": ${ddgst:-false} 00:16:42.162 }, 00:16:42.162 "method": "bdev_nvme_attach_controller" 00:16:42.162 } 00:16:42.162 EOF 00:16:42.162 )") 00:16:42.162 21:31:07 -- nvmf/common.sh@543 -- # cat 00:16:42.162 21:31:07 -- nvmf/common.sh@545 -- # jq . 00:16:42.162 21:31:07 -- nvmf/common.sh@546 -- # IFS=, 00:16:42.162 21:31:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:42.162 "params": { 00:16:42.162 "name": "Nvme1", 00:16:42.162 "trtype": "tcp", 00:16:42.162 "traddr": "10.0.0.2", 00:16:42.162 "adrfam": "ipv4", 00:16:42.162 "trsvcid": "4420", 00:16:42.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.162 "hdgst": false, 00:16:42.162 "ddgst": false 00:16:42.163 }, 00:16:42.163 "method": "bdev_nvme_attach_controller" 00:16:42.163 },{ 00:16:42.163 "params": { 00:16:42.163 "name": "Nvme2", 00:16:42.163 "trtype": "tcp", 00:16:42.163 "traddr": "10.0.0.2", 00:16:42.163 "adrfam": "ipv4", 00:16:42.163 "trsvcid": "4420", 00:16:42.163 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:42.163 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:42.163 "hdgst": false, 00:16:42.163 "ddgst": false 00:16:42.163 }, 00:16:42.163 "method": "bdev_nvme_attach_controller" 00:16:42.163 },{ 00:16:42.163 "params": { 00:16:42.163 "name": "Nvme3", 00:16:42.163 "trtype": "tcp", 00:16:42.163 "traddr": "10.0.0.2", 00:16:42.163 "adrfam": "ipv4", 00:16:42.163 "trsvcid": "4420", 00:16:42.163 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:42.163 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:42.163 "hdgst": false, 00:16:42.163 "ddgst": false 00:16:42.163 }, 00:16:42.163 "method": "bdev_nvme_attach_controller" 00:16:42.163 },{ 00:16:42.163 "params": { 00:16:42.163 "name": "Nvme4", 00:16:42.163 "trtype": "tcp", 00:16:42.163 "traddr": "10.0.0.2", 00:16:42.163 "adrfam": "ipv4", 00:16:42.163 "trsvcid": "4420", 00:16:42.163 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:42.163 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:42.163 "hdgst": false, 00:16:42.163 "ddgst": false 00:16:42.163 }, 00:16:42.163 "method": "bdev_nvme_attach_controller" 00:16:42.163 },{ 00:16:42.163 "params": { 00:16:42.163 "name": "Nvme5", 00:16:42.163 "trtype": "tcp", 00:16:42.163 "traddr": "10.0.0.2", 00:16:42.163 "adrfam": "ipv4", 00:16:42.163 "trsvcid": "4420", 00:16:42.163 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:42.163 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:42.163 "hdgst": false, 00:16:42.163 "ddgst": false 00:16:42.163 }, 00:16:42.163 "method": "bdev_nvme_attach_controller" 00:16:42.163 },{ 00:16:42.163 "params": { 00:16:42.163 "name": "Nvme6", 00:16:42.163 "trtype": "tcp", 00:16:42.163 "traddr": "10.0.0.2", 00:16:42.163 "adrfam": "ipv4", 00:16:42.163 "trsvcid": "4420", 00:16:42.163 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:42.163 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:42.163 "hdgst": false, 00:16:42.163 "ddgst": false 00:16:42.163 }, 00:16:42.163 "method": "bdev_nvme_attach_controller" 00:16:42.163 },{ 00:16:42.163 "params": { 00:16:42.163 
"name": "Nvme7", 00:16:42.163 "trtype": "tcp", 00:16:42.163 "traddr": "10.0.0.2", 00:16:42.163 "adrfam": "ipv4", 00:16:42.163 "trsvcid": "4420", 00:16:42.163 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:42.163 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:42.163 "hdgst": false, 00:16:42.163 "ddgst": false 00:16:42.163 }, 00:16:42.163 "method": "bdev_nvme_attach_controller" 00:16:42.163 },{ 00:16:42.163 "params": { 00:16:42.163 "name": "Nvme8", 00:16:42.163 "trtype": "tcp", 00:16:42.163 "traddr": "10.0.0.2", 00:16:42.163 "adrfam": "ipv4", 00:16:42.163 "trsvcid": "4420", 00:16:42.163 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:42.163 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:42.163 "hdgst": false, 00:16:42.163 "ddgst": false 00:16:42.163 }, 00:16:42.163 "method": "bdev_nvme_attach_controller" 00:16:42.163 },{ 00:16:42.163 "params": { 00:16:42.163 "name": "Nvme9", 00:16:42.163 "trtype": "tcp", 00:16:42.163 "traddr": "10.0.0.2", 00:16:42.163 "adrfam": "ipv4", 00:16:42.163 "trsvcid": "4420", 00:16:42.163 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:42.163 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:42.163 "hdgst": false, 00:16:42.163 "ddgst": false 00:16:42.163 }, 00:16:42.163 "method": "bdev_nvme_attach_controller" 00:16:42.163 },{ 00:16:42.163 "params": { 00:16:42.163 "name": "Nvme10", 00:16:42.163 "trtype": "tcp", 00:16:42.163 "traddr": "10.0.0.2", 00:16:42.163 "adrfam": "ipv4", 00:16:42.163 "trsvcid": "4420", 00:16:42.163 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:42.163 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:42.163 "hdgst": false, 00:16:42.163 "ddgst": false 00:16:42.163 }, 00:16:42.163 "method": "bdev_nvme_attach_controller" 00:16:42.163 }' 00:16:42.163 [2024-04-24 21:31:07.752135] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:16:42.163 [2024-04-24 21:31:07.752212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627087 ] 00:16:42.163 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.163 [2024-04-24 21:31:07.816448] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.421 [2024-04-24 21:31:07.924285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.335 Running I/O for 10 seconds... 
00:16:44.913 21:31:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:44.913 21:31:10 -- common/autotest_common.sh@850 -- # return 0 00:16:44.913 21:31:10 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:44.913 21:31:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:44.913 21:31:10 -- common/autotest_common.sh@10 -- # set +x 00:16:44.913 21:31:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:44.913 21:31:10 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.913 21:31:10 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:44.913 21:31:10 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:44.913 21:31:10 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:44.913 21:31:10 -- target/shutdown.sh@57 -- # local ret=1 00:16:44.913 21:31:10 -- target/shutdown.sh@58 -- # local i 00:16:44.913 21:31:10 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:44.913 21:31:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:44.913 21:31:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:44.913 21:31:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:44.913 21:31:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:44.913 21:31:10 -- common/autotest_common.sh@10 -- # set +x 00:16:44.913 21:31:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:44.913 21:31:10 -- target/shutdown.sh@60 -- # read_io_count=131 00:16:44.913 21:31:10 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:16:44.913 21:31:10 -- target/shutdown.sh@64 -- # ret=0 00:16:44.913 21:31:10 -- target/shutdown.sh@65 -- # break 00:16:44.913 21:31:10 -- target/shutdown.sh@69 -- # return 0 00:16:44.913 21:31:10 -- target/shutdown.sh@135 -- # killprocess 2626911 00:16:44.913 21:31:10 -- common/autotest_common.sh@936 -- # '[' -z 2626911 ']' 00:16:44.913 21:31:10 -- common/autotest_common.sh@940 -- # kill -0 2626911 00:16:44.913 21:31:10 -- common/autotest_common.sh@941 -- # uname 00:16:44.913 21:31:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.913 21:31:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2626911 00:16:44.913 21:31:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:44.913 21:31:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:44.913 21:31:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2626911' 00:16:44.913 killing process with pid 2626911 00:16:44.913 21:31:10 -- common/autotest_common.sh@955 -- # kill 2626911 00:16:44.913 21:31:10 -- common/autotest_common.sh@960 -- # wait 2626911 00:16:44.913 [2024-04-24 21:31:10.513354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1816dd0 is same with the state(5) to be set 00:16:44.913 [2024-04-24 21:31:10.513445] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1816dd0 is same with the state(5) to be set 00:16:44.913 [2024-04-24 21:31:10.513460] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1816dd0 is same with the state(5) to be set 00:16:44.913 [2024-04-24 21:31:10.513474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1816dd0 is same with the state(5) to be set 00:16:44.913 [2024-04-24 21:31:10.513487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1816dd0 is same with the state(5) to be set
00:16:44.914 [2024-04-24 21:31:10.514244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1816dd0 is same with the state(5) to be set
00:16:44.914 [2024-04-24 21:31:10.516240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1817280 is same with the state(5) to be set
00:16:44.915 [2024-04-24 21:31:10.517012] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1817280 is same with the state(5) to be set
00:16:44.915 [2024-04-24 21:31:10.519346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1817ba0 is same with the state(5) to be set
00:16:44.915 [2024-04-24 21:31:10.519497] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1817ba0 is same with the state(5) to be set
00:16:44.915 [2024-04-24 21:31:10.519873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.915 [2024-04-24 21:31:10.519914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.916 [2024-04-24 21:31:10.521441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.916 [2024-04-24 21:31:10.521455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.916 [2024-04-24 21:31:10.521472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.916 [2024-04-24 21:31:10.521486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.916 [2024-04-24 21:31:10.521503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.916 [2024-04-24 21:31:10.521516]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.916 [2024-04-24 21:31:10.521544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.916 [2024-04-24 21:31:10.521573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.916 [2024-04-24 21:31:10.521602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.916 [2024-04-24 21:31:10.521635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.916 [2024-04-24 21:31:10.521666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.916 [2024-04-24 21:31:10.521700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.916 [2024-04-24 21:31:10.521728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.916 [2024-04-24 21:31:10.521756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.916 [2024-04-24 21:31:10.521789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.916 [2024-04-24 21:31:10.521818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.916 [2024-04-24 21:31:10.521853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.916 [2024-04-24 21:31:10.521868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.917 [2024-04-24 21:31:10.521881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.917 [2024-04-24 21:31:10.521879] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.521904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.521927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.521939] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with [2024-04-24 21:31:10.521937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such devithe state(5) to be set 00:16:44.917 ce or address) on qpair id 1 00:16:44.917 [2024-04-24 21:31:10.521956] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.521968] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.521992] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.522006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.522018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.522031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.522043] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.522056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.522068] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.522081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.522094] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set 00:16:44.917 [2024-04-24 21:31:10.522107] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
[... tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18184e0 is same with the state(5) to be set - repeated (2024-04-24 21:31:10.522119 - 21:31:10.522723) ...]
00:16:44.917 [2024-04-24 21:31:10.522519] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15554e0 was disconnected and freed. reset controller.
00:16:44.917 [2024-04-24 21:31:10.522649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.917 [2024-04-24 21:31:10.522684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.917 [2024-04-24 21:31:10.522701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.917 [2024-04-24 21:31:10.522715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.917 [2024-04-24 21:31:10.522729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.917 [2024-04-24 21:31:10.522743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.917 [2024-04-24 21:31:10.522756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.917 [2024-04-24 21:31:10.522774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.917 [2024-04-24 21:31:10.522787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1653400 is same with the state(5) to be set
[... the same ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 / ABORTED - SQ DELETION sequence, each ending in the nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state recv state error, repeats for tqpair=0x1663cc0 (2024-04-24 21:31:10.522838 - 21:31:10.522962), tqpair=0x1086560 (21:31:10.523006 - 21:31:10.523125), tqpair=0x1086870 (21:31:10.523169 - 21:31:10.523288), tqpair=0x149c380 (21:31:10.523333 - 21:31:10.523447) and tqpair=0x14bc8f0 (21:31:10.523498 - 21:31:10.523612) ...]
00:16:44.918 [2024-04-24 21:31:10.523973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set
[... message repeated (2024-04-24 21:31:10.524008 - 21:31:10.524133), interleaved with the WRITE aborts below ...]
00:16:44.918 [2024-04-24 21:31:10.524066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.918 [2024-04-24 21:31:10.524091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.918 [2024-04-24 21:31:10.524115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.918 [2024-04-24 21:31:10.524131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.918 [2024-04-24 21:31:10.524147] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.918 [2024-04-24 21:31:10.524156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.918 [2024-04-24 21:31:10.524159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.918 [2024-04-24 21:31:10.524171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-24 21:31:10.524173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.918 the state(5) to be set 00:16:44.918 [2024-04-24 21:31:10.524186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.918 [2024-04-24 21:31:10.524189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.918 [2024-04-24 21:31:10.524199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.918 [2024-04-24 21:31:10.524204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.918 [2024-04-24 21:31:10.524212] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:1[2024-04-24 21:31:10.524249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-24 21:31:10.524267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524295] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the 
state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:1[2024-04-24 21:31:10.524346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-24 21:31:10.524361] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524376] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524388] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524401] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524413] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1818970 is same with [2024-04-24 21:31:10.524450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:16:44.919 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524467] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524494] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524506] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with [2024-04-24 21:31:10.524530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:1the state(5) to be set 00:16:44.919 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524545] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524558] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524622] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524643] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with [2024-04-24 21:31:10.524647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:16:44.919 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524712] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524725] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524737] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-24 21:31:10.524750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524764] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524776] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524789] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.919 [2024-04-24 21:31:10.524802] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.919 [2024-04-24 21:31:10.524809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.919 [2024-04-24 21:31:10.524814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.920 [2024-04-24 21:31:10.524825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:1[2024-04-24 21:31:10.524826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 the state(5) to be set 00:16:44.920 [2024-04-24 21:31:10.524841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-24 21:31:10.524841] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 the state(5) to be set 00:16:44.920 [2024-04-24 21:31:10.524861] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818970 is same with the state(5) to be set 00:16:44.920 [2024-04-24 21:31:10.524863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.524888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.524904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.524917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.524932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.524956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.524971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.524984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.524999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:44.920 [2024-04-24 21:31:10.525602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525885] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.920 [2024-04-24 21:31:10.525896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24
21:31:10.525911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525912] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.920 [2024-04-24 21:31:10.525927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.920 [2024-04-24 21:31:10.525928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.920 [2024-04-24 21:31:10.525945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.920 [2024-04-24 21:31:10.525946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.920 [2024-04-24 21:31:10.525964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.525968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.921 [2024-04-24 21:31:10.525979] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.525982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.921 [2024-04-24 21:31:10.525991] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.525998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.921 [2024-04-24 21:31:10.526009] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.921 [2024-04-24 21:31:10.526022] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.921 [2024-04-24 21:31:10.526036] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.921 [2024-04-24 21:31:10.526049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.921
[2024-04-24 21:31:10.526076] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.921 [2024-04-24 21:31:10.526088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.921 [2024-04-24 21:31:10.526101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.921 [2024-04-24 21:31:10.526114] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526128] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526141] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:16:44.921 [2024-04-24 21:31:10.526159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526176] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526189] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526214] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526226] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526228] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15506f0 was disconnected and freed. reset controller. 
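[editor's note] CQ transport error -6 (No such device or address) is the host-side poller reporting -ENXIO for qpair id 1, after which bdev_nvme_disconnected_qpair_cb frees qpair 0x15506f0 and schedules the controller reset; the surrounding tcp.c:1587 errors are the target repeatedly noting that the requested recv state already matches the current one. A sketch of the poll-group pattern that produces such a callback, assuming a caller-created group; the handler body is illustrative, not bdev_nvme's actual logic:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Illustrative disconnected-qpair handler: the poll group invokes this for
 * any qpair whose transport failed (e.g. the -ENXIO reported above). */
static void
on_disconnected_qpair(struct spdk_nvme_qpair *qpair, void *poll_group_ctx)
{
	(void)poll_group_ctx;
	/* Drop the dead qpair; a reset/reconnect path is expected to rebuild it. */
	spdk_nvme_ctrlr_free_io_qpair(qpair);
}

/* One polling iteration over every qpair in the group. */
static void
poll_once(struct spdk_nvme_poll_group *group)
{
	int64_t rc = spdk_nvme_poll_group_process_completions(group, 0 /* unlimited */,
							      on_disconnected_qpair);
	if (rc < 0) {
		fprintf(stderr, "poll group error: %" PRId64 "\n", rc);
	}
}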
00:16:44.921 [2024-04-24 21:31:10.526237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526298] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526332] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526359] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526395] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526435] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526459] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526483] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526534] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526547] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526558] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526582] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526595] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526606] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526618] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526639] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526653] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526697] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526709] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526721] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526744] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.526756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818e00 is same with the state(5) to be set 00:16:44.921 [2024-04-24 21:31:10.528991] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:16:44.921 [2024-04-24 21:31:10.529040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1653400 (9): Bad file descriptor 00:16:44.921 [2024-04-24 21:31:10.530990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.921 [2024-04-24 21:31:10.531018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.921 [2024-04-24 21:31:10.531040] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.921 [2024-04-24 21:31:10.531056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.921 [2024-04-24 21:31:10.531073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.921 [2024-04-24 21:31:10.531088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.921 [2024-04-24 21:31:10.531110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.921 [2024-04-24 21:31:10.531126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.921 [2024-04-24 21:31:10.531142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.921 [2024-04-24 21:31:10.531156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.921 [2024-04-24 21:31:10.531172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.921 [2024-04-24 21:31:10.531186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.531972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.531985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.532005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.532019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.532035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.532049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.532065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.532079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.532095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.532109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.532126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.532140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.532157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.532171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.532188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.532201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.922 [2024-04-24 21:31:10.532216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.922 [2024-04-24 21:31:10.532230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.532652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.532666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.549432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.549465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.549495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.549524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.549553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.549583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.549613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.549658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.549691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.549720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.549749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.549876] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1553040 was disconnected and freed. reset controller. 00:16:44.923 [2024-04-24 21:31:10.550280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550505] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.923 [2024-04-24 21:31:10.550727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.923 [2024-04-24 21:31:10.550747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.550761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.550777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.550791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.550807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.550821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.550837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.550851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.550866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.550880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.550895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.550909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.550925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.550939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.550955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.550968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.550983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.550998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.924 [2024-04-24 21:31:10.551720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.924 [2024-04-24 21:31:10.551734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:16:44.924 [2024-04-24 21:31:10.551750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.924 [2024-04-24 21:31:10.551763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.924 [2024-04-24 21:31:10.551779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.924 [2024-04-24 21:31:10.551793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.924 [2024-04-24 21:31:10.551808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.924 [2024-04-24 21:31:10.551822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.924 [2024-04-24 21:31:10.551838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.924 [2024-04-24 21:31:10.551853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.924 [2024-04-24 21:31:10.551868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.924 [2024-04-24 21:31:10.551886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.924 [2024-04-24 21:31:10.551903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.924 [2024-04-24 21:31:10.551918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.924 [2024-04-24 21:31:10.551933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.924 [2024-04-24 21:31:10.551948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.924 [2024-04-24 21:31:10.551964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.551977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.551993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.552007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.552036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.552065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.552094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.552124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.552153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.552182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.552212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.552242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552330] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1554050 was disconnected and freed. reset controller.
00:16:44.925 [2024-04-24 21:31:10.552400] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:16:44.925 [2024-04-24 21:31:10.552456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149c380 (9): Bad file descriptor
00:16:44.925 [2024-04-24 21:31:10.552534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163d1f0 is same with the state(5) to be set
00:16:44.925 [2024-04-24 21:31:10.552721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1651c00 is same with the state(5) to be set
00:16:44.925 [2024-04-24 21:31:10.552884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.552982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.552995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165ca20 is same with the state(5) to be set
00:16:44.925 [2024-04-24 21:31:10.553034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1663cc0 (9): Bad file descriptor
00:16:44.925 [2024-04-24 21:31:10.553065] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1086560 (9): Bad file descriptor
00:16:44.925 [2024-04-24 21:31:10.553095] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1086870 (9): Bad file descriptor
00:16:44.925 [2024-04-24 21:31:10.553144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.553165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.553194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.553222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.925 [2024-04-24 21:31:10.553249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162e150 is same with the state(5) to be set
00:16:44.925 [2024-04-24 21:31:10.553290] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bc8f0 (9): Bad file descriptor
00:16:44.925 [2024-04-24 21:31:10.553736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.553761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.553800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.553831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.553861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.553897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.553928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.553959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.553975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.553989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.554005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.925 [2024-04-24 21:31:10.554019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.925 [2024-04-24 21:31:10.554034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.554048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.554063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551be0 is same with the state(5) to be set
00:16:44.926 [2024-04-24 21:31:10.554140] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1551be0 was disconnected and freed. reset controller.
00:16:44.926 [2024-04-24 21:31:10.556867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:16:44.926 [2024-04-24 21:31:10.557096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.926 [2024-04-24 21:31:10.557265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.926 [2024-04-24 21:31:10.557291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1653400 with addr=10.0.0.2, port=4420
00:16:44.926 [2024-04-24 21:31:10.557307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1653400 is same with the state(5) to be set
00:16:44.926 [2024-04-24 21:31:10.558836] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:16:44.926 [2024-04-24 21:31:10.558867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:16:44.926 [2024-04-24 21:31:10.558902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165ca20 (9): Bad file descriptor
00:16:44.926 [2024-04-24 21:31:10.559069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.926 [2024-04-24 21:31:10.559236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.926 [2024-04-24 21:31:10.559261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x149c380 with addr=10.0.0.2, port=4420
00:16:44.926 [2024-04-24 21:31:10.559277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149c380 is same with the state(5) to be set
00:16:44.926 [2024-04-24 21:31:10.559459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.926 [2024-04-24 21:31:10.559618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.926 [2024-04-24 21:31:10.559651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14bc8f0 with addr=10.0.0.2, port=4420
00:16:44.926 [2024-04-24 21:31:10.559676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bc8f0 is same with the state(5) to be set
00:16:44.926 [2024-04-24 21:31:10.559696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1653400 (9): Bad file descriptor
00:16:44.926 [2024-04-24 21:31:10.559767] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:44.926 [2024-04-24 21:31:10.559842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.559866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.559898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.559919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.559938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.559953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.559968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.559982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.559998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.926 [2024-04-24 21:31:10.560799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.926 [2024-04-24 21:31:10.560813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.927 [2024-04-24 21:31:10.560829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.927 [2024-04-24 21:31:10.560843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.927 [2024-04-24 21:31:10.560859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.560873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.560888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.560901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.560917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.560931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.560947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.560960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.560980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.560995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.928 [2024-04-24 21:31:10.561431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.928 [2024-04-24 21:31:10.561445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.561826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.561840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154f290 is same with the state(5) to be set
00:16:44.929 [2024-04-24 21:31:10.561921] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x154f290 was disconnected and freed. reset controller.
00:16:44.929 [2024-04-24 21:31:10.562307] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:44.929 [2024-04-24 21:31:10.562381] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:44.929 [2024-04-24 21:31:10.563092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.929 [2024-04-24 21:31:10.563272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.929 [2024-04-24 21:31:10.563297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086560 with addr=10.0.0.2, port=4420
00:16:44.929 [2024-04-24 21:31:10.563313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1086560 is same with the state(5) to be set
00:16:44.929 [2024-04-24 21:31:10.563348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149c380 (9): Bad file descriptor
00:16:44.929 [2024-04-24 21:31:10.563371] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bc8f0 (9): Bad file descriptor
00:16:44.929 [2024-04-24 21:31:10.563387] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:16:44.929 [2024-04-24 21:31:10.563400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:16:44.929 [2024-04-24 21:31:10.563415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:16:44.929 [2024-04-24 21:31:10.563464] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163d1f0 (9): Bad file descriptor
00:16:44.929 [2024-04-24 21:31:10.563503] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1651c00 (9): Bad file descriptor
00:16:44.929 [2024-04-24 21:31:10.563537] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:44.929 [2024-04-24 21:31:10.563572] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162e150 (9): Bad file descriptor
00:16:44.929 [2024-04-24 21:31:10.563607] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:44.929 [2024-04-24 21:31:10.565165] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:44.929 [2024-04-24 21:31:10.565209] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:44.929 [2024-04-24 21:31:10.565235] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:16:44.929 [2024-04-24 21:31:10.565410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.929 [2024-04-24 21:31:10.565577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.929 [2024-04-24 21:31:10.565602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165ca20 with addr=10.0.0.2, port=4420
00:16:44.929 [2024-04-24 21:31:10.565624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165ca20 is same with the state(5) to be set
00:16:44.929 [2024-04-24 21:31:10.565654] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1086560 (9): Bad file descriptor
00:16:44.929 [2024-04-24 21:31:10.565671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:16:44.929 [2024-04-24 21:31:10.565684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:16:44.929 [2024-04-24 21:31:10.565697] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:16:44.929 [2024-04-24 21:31:10.565717] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:16:44.929 [2024-04-24 21:31:10.565731] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:16:44.929 [2024-04-24 21:31:10.565744] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:16:44.929 [2024-04-24 21:31:10.565805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.929 [2024-04-24 21:31:10.565827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.929 [2024-04-24 21:31:10.565849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.565864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.565881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.565895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.565911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.565924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.565940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.565954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.565969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.565982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.565998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.930 [2024-04-24 21:31:10.566733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.930 [2024-04-24 21:31:10.566748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.566762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.566778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.566792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.566807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.566820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.566836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.566854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.566870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.566884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.566900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.566914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.566930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.566943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.566958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.566972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.566988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.931 [2024-04-24 21:31:10.567565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.931 [2024-04-24 21:31:10.567580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.932 [2024-04-24 21:31:10.567598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.932 [2024-04-24 21:31:10.567614] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.932 [2024-04-24 21:31:10.567633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.932 [2024-04-24 21:31:10.567651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.932 [2024-04-24 21:31:10.567671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.932 [2024-04-24 21:31:10.567686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.932 [2024-04-24 21:31:10.567700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.932 [2024-04-24 21:31:10.567716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.932 [2024-04-24 21:31:10.567730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.932 [2024-04-24 21:31:10.567744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154df50 is same with the state(5) to be set
00:16:44.932 [2024-04-24 21:31:10.569038] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:44.932 [2024-04-24 21:31:10.569062] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:44.932 [2024-04-24 21:31:10.569078] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:44.932 [2024-04-24 21:31:10.569282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.932 [2024-04-24 21:31:10.569433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.932 [2024-04-24 21:31:10.569459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1663cc0 with addr=10.0.0.2, port=4420
00:16:44.932 [2024-04-24 21:31:10.569475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1663cc0 is same with the state(5) to be set
00:16:44.932 [2024-04-24 21:31:10.569494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165ca20 (9): Bad file descriptor
00:16:44.932 [2024-04-24 21:31:10.569511] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:16:44.932 [2024-04-24 21:31:10.569524] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:16:44.932 [2024-04-24 21:31:10.569537] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:16:44.932 [2024-04-24 21:31:10.569600] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:44.932 [2024-04-24 21:31:10.569944] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:44.932 [2024-04-24 21:31:10.570119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.932 [2024-04-24 21:31:10.570284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.932 [2024-04-24 21:31:10.570309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086870 with addr=10.0.0.2, port=4420
00:16:44.932 [2024-04-24 21:31:10.570325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1086870 is same with the state(5) to be set
00:16:44.932 [2024-04-24 21:31:10.570343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1663cc0 (9): Bad file descriptor
00:16:44.932 [2024-04-24 21:31:10.570360] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:16:44.932 [2024-04-24 21:31:10.570379] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:16:44.932 [2024-04-24 21:31:10.570393] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:16:44.932 [2024-04-24 21:31:10.570723] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:16:44.932 [2024-04-24 21:31:10.570749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:16:44.932 [2024-04-24 21:31:10.570767] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:44.932 [2024-04-24 21:31:10.570800] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1086870 (9): Bad file descriptor
00:16:44.932 [2024-04-24 21:31:10.570821] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:16:44.932 [2024-04-24 21:31:10.570835] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:16:44.932 [2024-04-24 21:31:10.570848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:16:44.932 [2024-04-24 21:31:10.570906] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:44.932 [2024-04-24 21:31:10.571065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.932 [2024-04-24 21:31:10.571215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.932 [2024-04-24 21:31:10.571240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14bc8f0 with addr=10.0.0.2, port=4420
00:16:44.932 [2024-04-24 21:31:10.571255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bc8f0 is same with the state(5) to be set
00:16:44.932 [2024-04-24 21:31:10.571438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.932 [2024-04-24 21:31:10.571594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:44.932 [2024-04-24 21:31:10.571618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x149c380 with addr=10.0.0.2, port=4420
00:16:44.932 [2024-04-24 21:31:10.571644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149c380 is same with the state(5) to be set
00:16:44.932 [2024-04-24 21:31:10.571679] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:16:44.932 [2024-04-24 21:31:10.571694] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:16:44.932 [2024-04-24 21:31:10.571707] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:44.932 [2024-04-24 21:31:10.571759] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:44.932 [2024-04-24 21:31:10.571782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bc8f0 (9): Bad file descriptor
00:16:44.932 [2024-04-24 21:31:10.571803] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149c380 (9): Bad file descriptor
00:16:44.932 [2024-04-24 21:31:10.571840] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:16:44.932 [2024-04-24 21:31:10.571856] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:16:44.932 [2024-04-24 21:31:10.571869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:16:44.932 [2024-04-24 21:31:10.571886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:16:44.932 [2024-04-24 21:31:10.571900] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:16:44.932 [2024-04-24 21:31:10.571913] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:16:44.932 [2024-04-24 21:31:10.571956] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:44.932 [2024-04-24 21:31:10.571974] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:44.932 [2024-04-24 21:31:10.572898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.932 [2024-04-24 21:31:10.572923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.932 [2024-04-24 21:31:10.572948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.932 [2024-04-24 21:31:10.572964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.932 [2024-04-24 21:31:10.572982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.932 [2024-04-24 21:31:10.572997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.932 [2024-04-24 21:31:10.573012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.932 [2024-04-24 21:31:10.573026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.932 [2024-04-24 21:31:10.573042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.932 [2024-04-24 21:31:10.573056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.932 [2024-04-24 21:31:10.573071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.932 [2024-04-24 21:31:10.573086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.932 [2024-04-24 21:31:10.573102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.932 [2024-04-24 21:31:10.573116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.932 [2024-04-24 21:31:10.573131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.932 [2024-04-24 21:31:10.573145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 
21:31:10.573220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573520] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.573975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.573989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.574004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.574018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.574038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.574052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.574068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.574082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.574097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.574111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.933 [2024-04-24 21:31:10.574126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.933 [2024-04-24 21:31:10.574139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.574837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.574851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14928e0 is same with the state(5) to be set 00:16:44.934 [2024-04-24 21:31:10.576128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576305] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.934 [2024-04-24 21:31:10.576590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.934 [2024-04-24 21:31:10.576603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.576982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.576996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.935 [2024-04-24 21:31:10.577483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.935 [2024-04-24 21:31:10.577497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:44.935 [2024-04-24 21:31:10.577512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.935 [2024-04-24 21:31:10.577526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.935 [2024-04-24 21:31:10.577542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.935 [2024-04-24 21:31:10.577559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.935 [2024-04-24 21:31:10.577575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.935 [2024-04-24 21:31:10.577589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.935 [2024-04-24 21:31:10.577604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.935 [2024-04-24 21:31:10.577618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.935 [2024-04-24 21:31:10.577639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.935 [2024-04-24 21:31:10.577655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.935 [2024-04-24 21:31:10.577678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.935 [2024-04-24 21:31:10.577692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.935 [2024-04-24 21:31:10.577708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.935 [2024-04-24 21:31:10.577722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.935 [2024-04-24 21:31:10.577737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.935 [2024-04-24 21:31:10.577752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.935 [2024-04-24 21:31:10.577767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.935 [2024-04-24 21:31:10.577781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.935 [2024-04-24 21:31:10.577797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.935 [2024-04-24 21:31:10.577811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.936 [2024-04-24 21:31:10.577826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.936 [2024-04-24 21:31:10.577839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.936 [2024-04-24 21:31:10.577855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.936 [2024-04-24 21:31:10.577869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.936 [2024-04-24 21:31:10.577884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.936 [2024-04-24 21:31:10.577897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.936 [2024-04-24 21:31:10.577912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.936 [2024-04-24 21:31:10.577926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.936 [2024-04-24 21:31:10.577943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.936 [2024-04-24 21:31:10.577961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.936 [2024-04-24 21:31:10.577977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.936 [2024-04-24 21:31:10.577991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.936 [2024-04-24 21:31:10.578006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.936 [2024-04-24 21:31:10.578021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.936 [2024-04-24 21:31:10.578036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:44.936 [2024-04-24 21:31:10.578050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:44.936 [2024-04-24 21:31:10.578064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1493d20 is same with the state(5) to be set
00:16:45.196 [2024-04-24 21:31:10.579283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.196 [2024-04-24 21:31:10.579667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.196 [2024-04-24 21:31:10.579681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.579697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.579711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.579727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.579741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.579756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.579770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.579785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.579798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.579814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.579827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.579843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.579857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.579873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.579886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.579903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.579916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.579939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.579953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.579969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.579982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.579998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.197 [2024-04-24 21:31:10.580835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.197 [2024-04-24 21:31:10.580849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.580864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.580879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.580894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.580907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.580923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.580937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.580952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.580966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.580981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.580994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.581010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.581024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.581040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.581058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.581075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.581088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.581103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.581117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.581132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.581146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.581161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.581176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.581191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:45.198 [2024-04-24 21:31:10.581204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:45.198 [2024-04-24 21:31:10.581218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495180 is same with the state(5) to be set
00:16:45.198 [2024-04-24 21:31:10.583050] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:16:45.198 [2024-04-24 21:31:10.583085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:16:45.198 [2024-04-24 21:31:10.583104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:16:45.198 task offset: 18176 on job bdev=Nvme10n1 fails
00:16:45.198
00:16:45.198 Latency(us)
00:16:45.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:45.198 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:45.198 Job: Nvme1n1 ended in about 0.85 seconds with error
00:16:45.198 Verification LBA range: start 0x0 length 0x400
00:16:45.198 Nvme1n1 : 0.85 151.40 9.46 75.70 0.00 278514.09 23107.51 253211.69
00:16:45.198 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:45.198 Job: Nvme2n1 ended in about 0.84 seconds with error
00:16:45.198 Verification LBA range: start 0x0 length 0x400
00:16:45.198 Nvme2n1 : 0.84 152.13 9.51 76.07 0.00 270938.01 22330.79 268746.15
00:16:45.198 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:45.198 Job: Nvme3n1 ended in about 0.81 seconds with error
00:16:45.198 Verification LBA range: start 0x0 length 0x400
00:16:45.198 Nvme3n1 : 0.81 237.94 14.87 79.31 0.00 189954.42 5461.33 222142.77
00:16:45.198 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:45.198 Job: Nvme4n1 ended in about 0.84 seconds with error
00:16:45.198 Verification LBA range: start 0x0 length 0x400
00:16:45.198 Nvme4n1 : 0.84 153.25 9.58 11.97 0.00 346590.81 41360.50 282727.16
00:16:45.198 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:45.198 Job: Nvme5n1 ended in about 0.83 seconds with error
00:16:45.198 Verification LBA range: start 0x0 length 0x400
00:16:45.198 Nvme5n1 : 0.83 153.87 9.62 76.93 0.00 249435.02 23495.87 259425.47
00:16:45.198 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:45.198 Job: Nvme6n1 ended in about 0.85 seconds with error
00:16:45.198 Verification LBA range: start 0x0 length 0x400
00:16:45.198 Nvme6n1 : 0.85 150.14 9.38 75.07 0.00 250205.42 20971.52 256318.58
00:16:45.198 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:45.198 Job: Nvme7n1 ended in about 0.86 seconds with error
00:16:45.198 Verification LBA range: start 0x0 length 0x400
00:16:45.198 Nvme7n1 : 0.86 149.58 9.35 74.79 0.00 245246.23 22233.69 253211.69
00:16:45.198 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:45.198 Job: Nvme8n1 ended in about 0.86 seconds with error
00:16:45.198 Verification LBA range: start 0x0 length 0x400
00:16:45.198 Nvme8n1 : 0.86 157.18 9.82 74.52 0.00 231892.79 55924.05 270299.59
00:16:45.198 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:45.198 Job: Nvme9n1 ended in about 0.83 seconds with error
00:16:45.198 Verification LBA range: start 0x0 length 0x400
00:16:45.198 Nvme9n1 : 0.83 153.63 9.60 76.82 0.00 225950.28 22427.88 296708.17
00:16:45.198 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:45.198 Job: Nvme10n1 ended in about 0.81 seconds with error
00:16:45.198 Verification LBA range: start 0x0 length 0x400
00:16:45.198 Nvme10n1 : 0.81 158.88 9.93 79.44 0.00 210834.01 8349.77 256318.58
00:16:45.198 ===================================================================================================================
00:16:45.198 Total : 1618.01 101.13 700.62 0.00 245214.33 5461.33 296708.17
00:16:45.198 [2024-04-24 21:31:10.611709] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:45.198 [2024-04-24 21:31:10.611789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:16:45.198 [2024-04-24 21:31:10.612714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.198 [2024-04-24 21:31:10.612929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.198 [2024-04-24 21:31:10.612957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1653400 with addr=10.0.0.2, port=4420
00:16:45.198 [2024-04-24 21:31:10.612977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1653400 is same with the state(5) to be set
00:16:45.198 [2024-04-24 21:31:10.613133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.198 [2024-04-24 21:31:10.613294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.198 [2024-04-24 21:31:10.613320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163d1f0 with addr=10.0.0.2, port=4420
00:16:45.198 [2024-04-24 21:31:10.613337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163d1f0 is same with the state(5) to be set
00:16:45.198 [2024-04-24 21:31:10.613486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.198 [2024-04-24 21:31:10.613648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.198 [2024-04-24 21:31:10.613675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x162e150 with addr=10.0.0.2, port=4420
00:16:45.198 [2024-04-24 21:31:10.613691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162e150 is same with the state(5) to be set
00:16:45.198 [2024-04-24 21:31:10.613838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.198 [2024-04-24 21:31:10.614006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.198 [2024-04-24 21:31:10.614032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1651c00 with addr=10.0.0.2, port=4420
00:16:45.198 [2024-04-24 21:31:10.614049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1651c00 is same with the state(5) to be set
00:16:45.198 [2024-04-24 21:31:10.614080] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:45.198 [2024-04-24 21:31:10.614112] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:45.198 [2024-04-24 21:31:10.614133] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:45.198 [2024-04-24 21:31:10.614151] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:45.198 [2024-04-24 21:31:10.614169] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:45.198 [2024-04-24 21:31:10.614186] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:45.198 [2024-04-24 21:31:10.615042] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:16:45.198 [2024-04-24 21:31:10.615072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:16:45.198 [2024-04-24 21:31:10.615090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:16:45.198 [2024-04-24 21:31:10.615107] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:45.198 [2024-04-24 21:31:10.615124] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:16:45.198 [2024-04-24 21:31:10.615141] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:16:45.198 [2024-04-24 21:31:10.615219] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1653400 (9): Bad file descriptor
00:16:45.198 [2024-04-24 21:31:10.615249] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163d1f0 (9): Bad file descriptor
00:16:45.198 [2024-04-24 21:31:10.615268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162e150 (9): Bad file descriptor
00:16:45.199 [2024-04-24 21:31:10.615286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1651c00 (9): Bad file descriptor
00:16:45.199 [2024-04-24 21:31:10.615495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.615671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.615697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165ca20 with addr=10.0.0.2, port=4420
00:16:45.199 [2024-04-24 21:31:10.615713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165ca20 is same with the state(5) to be set
00:16:45.199 [2024-04-24 21:31:10.615871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.616203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.616229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1663cc0 with addr=10.0.0.2, port=4420
00:16:45.199 [2024-04-24 21:31:10.616245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1663cc0 is same with the state(5) to be set
00:16:45.199 [2024-04-24 21:31:10.616398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.616575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.616601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086560 with addr=10.0.0.2, port=4420
00:16:45.199 [2024-04-24 21:31:10.616617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1086560 is same with the state(5) to be set
00:16:45.199 [2024-04-24 21:31:10.616801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.616963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.616990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086870 with addr=10.0.0.2, port=4420
00:16:45.199 [2024-04-24 21:31:10.617006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1086870 is same with the state(5) to be set
00:16:45.199 [2024-04-24 21:31:10.617161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.617307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.617332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x149c380 with addr=10.0.0.2, port=4420
00:16:45.199 [2024-04-24 21:31:10.617348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149c380 is same with the state(5) to be set
00:16:45.199 [2024-04-24 21:31:10.617497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.617696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:45.199 [2024-04-24 21:31:10.617723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14bc8f0 with addr=10.0.0.2, port=4420
00:16:45.199 [2024-04-24 21:31:10.617739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bc8f0 is same with the state(5) to be set
00:16:45.199 [2024-04-24 21:31:10.617755] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:16:45.199 [2024-04-24 21:31:10.617768] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:16:45.199 [2024-04-24 21:31:10.617784] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:16:45.199 [2024-04-24 21:31:10.617803] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:16:45.199 [2024-04-24 21:31:10.617818] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:16:45.199 [2024-04-24 21:31:10.617831] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:16:45.199 [2024-04-24 21:31:10.617847] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:16:45.199 [2024-04-24 21:31:10.617861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:16:45.199 [2024-04-24 21:31:10.617874] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:16:45.199 [2024-04-24 21:31:10.617891] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:16:45.199 [2024-04-24 21:31:10.617905] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:16:45.199 [2024-04-24 21:31:10.617917] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:16:45.199 [2024-04-24 21:31:10.617989] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:45.199 [2024-04-24 21:31:10.618011] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:45.199 [2024-04-24 21:31:10.618023] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:45.199 [2024-04-24 21:31:10.618034] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:45.199 [2024-04-24 21:31:10.618051] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165ca20 (9): Bad file descriptor
00:16:45.199 [2024-04-24 21:31:10.618071] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1663cc0 (9): Bad file descriptor
00:16:45.199 [2024-04-24 21:31:10.618088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1086560 (9): Bad file descriptor
00:16:45.199 [2024-04-24 21:31:10.618105] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1086870 (9): Bad file descriptor
00:16:45.199 [2024-04-24 21:31:10.618122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149c380 (9): Bad file descriptor
00:16:45.199 [2024-04-24 21:31:10.618139] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bc8f0 (9): Bad file descriptor
00:16:45.199 [2024-04-24 21:31:10.618201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:16:45.199 [2024-04-24 21:31:10.618222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:16:45.199 [2024-04-24 21:31:10.618237] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:16:45.199 [2024-04-24 21:31:10.618255] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:16:45.199 [2024-04-24 21:31:10.618268] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:16:45.199 [2024-04-24 21:31:10.618281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:16:45.199 [2024-04-24 21:31:10.618296] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:16:45.199 [2024-04-24 21:31:10.618310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:16:45.199 [2024-04-24 21:31:10.618324] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:16:45.199 [2024-04-24 21:31:10.618341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:16:45.199 [2024-04-24 21:31:10.618355] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:16:45.199 [2024-04-24 21:31:10.618367] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:45.199 [2024-04-24 21:31:10.618383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:16:45.199 [2024-04-24 21:31:10.618397] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:16:45.199 [2024-04-24 21:31:10.618411] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:16:45.199 [2024-04-24 21:31:10.618427] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:16:45.199 [2024-04-24 21:31:10.618441] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:16:45.199 [2024-04-24 21:31:10.618453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:16:45.199 [2024-04-24 21:31:10.618493] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:45.199 [2024-04-24 21:31:10.618512] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:45.199 [2024-04-24 21:31:10.618524] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:45.199 [2024-04-24 21:31:10.618535] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:45.199 [2024-04-24 21:31:10.618547] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:45.199 [2024-04-24 21:31:10.618559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:45.459 21:31:11 -- target/shutdown.sh@136 -- # nvmfpid=
00:16:45.459 21:31:11 -- target/shutdown.sh@139 -- # sleep 1
00:16:46.836 21:31:12 -- target/shutdown.sh@142 -- # kill -9 2627087
00:16:46.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2627087) - No such process
00:16:46.836 21:31:12 -- target/shutdown.sh@142 -- # true
00:16:46.836 21:31:12 -- target/shutdown.sh@144 -- # stoptarget
00:16:46.836 21:31:12 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:16:46.836 21:31:12 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:46.836 21:31:12 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:46.836 21:31:12 -- target/shutdown.sh@45 -- # nvmftestfini
00:16:46.836 21:31:12 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:46.836 21:31:12 -- nvmf/common.sh@117 -- # sync
00:16:46.836 21:31:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:46.836 21:31:12 -- nvmf/common.sh@120 -- # set +e
00:16:46.836 21:31:12 -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:46.836 21:31:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:46.836 rmmod nvme_tcp
00:16:46.836 rmmod nvme_fabrics
00:16:46.836 rmmod nvme_keyring
00:16:46.837 21:31:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:46.837 21:31:12 -- nvmf/common.sh@124 -- # set -e
00:16:46.837 21:31:12 -- nvmf/common.sh@125 -- # return 0
00:16:46.837 21:31:12 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:16:46.837 21:31:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:16:46.837 21:31:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:16:46.837 21:31:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:16:46.837 21:31:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:46.837 21:31:12 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:46.837 21:31:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:46.837 21:31:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:46.837 21:31:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:48.739 21:31:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:48.739
00:16:48.739 real 0m7.604s
00:16:48.739 user 0m18.979s
00:16:48.739 sys 0m1.419s
00:16:48.739 21:31:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:48.739 21:31:14 -- common/autotest_common.sh@10 -- # set +x
00:16:48.739 ************************************
00:16:48.739 END TEST nvmf_shutdown_tc3
00:16:48.739 ************************************
00:16:48.739 21:31:14 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:16:48.739
00:16:48.739 real 0m29.176s
00:16:48.739 user 1m23.308s
00:16:48.739 sys 0m6.615s
00:16:48.739 21:31:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:48.739 21:31:14 -- common/autotest_common.sh@10 -- # set +x
00:16:48.739 ************************************
00:16:48.739 END TEST nvmf_shutdown
00:16:48.739 ************************************
00:16:48.739 21:31:14 -- nvmf/nvmf.sh@84 -- # timing_exit target
00:16:48.739 21:31:14 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:48.739 21:31:14 -- common/autotest_common.sh@10 -- # set +x
00:16:48.739 21:31:14 -- nvmf/nvmf.sh@86 -- # timing_enter host
00:16:48.739 21:31:14 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:48.739 21:31:14 -- common/autotest_common.sh@10 -- # set +x
21:31:14 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:16:48.739 21:31:14 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:48.739 21:31:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:48.739 21:31:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:48.739 21:31:14 -- common/autotest_common.sh@10 -- # set +x 00:16:48.997 ************************************ 00:16:48.997 START TEST nvmf_multicontroller 00:16:48.997 ************************************ 00:16:48.997 21:31:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:48.997 * Looking for test storage... 00:16:48.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:48.997 21:31:14 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.997 21:31:14 -- nvmf/common.sh@7 -- # uname -s 00:16:48.997 21:31:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.997 21:31:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.997 21:31:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.997 21:31:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.997 21:31:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.997 21:31:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.997 21:31:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.997 21:31:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.997 21:31:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.997 21:31:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.997 21:31:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.997 21:31:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.997 21:31:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.997 21:31:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.997 21:31:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.997 21:31:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.997 21:31:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:48.997 21:31:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.997 21:31:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.997 21:31:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.998 21:31:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.998 21:31:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.998 21:31:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.998 21:31:14 -- paths/export.sh@5 -- # export PATH 00:16:48.998 21:31:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.998 21:31:14 -- nvmf/common.sh@47 -- # : 0 00:16:48.998 21:31:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:48.998 21:31:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:48.998 21:31:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.998 21:31:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.998 21:31:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.998 21:31:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:48.998 21:31:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:48.998 21:31:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:48.998 21:31:14 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:48.998 21:31:14 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:48.998 21:31:14 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:48.998 21:31:14 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:48.998 21:31:14 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:48.998 21:31:14 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:48.998 21:31:14 -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:48.998 21:31:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:48.998 21:31:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.998 21:31:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:48.998 21:31:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:48.998 21:31:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:48.998 21:31:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.998 21:31:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.998 21:31:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:16:48.998 21:31:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:48.998 21:31:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:48.998 21:31:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:48.998 21:31:14 -- common/autotest_common.sh@10 -- # set +x 00:16:50.901 21:31:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:50.901 21:31:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:50.901 21:31:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:50.901 21:31:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:50.901 21:31:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:50.901 21:31:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:50.901 21:31:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:50.901 21:31:16 -- nvmf/common.sh@295 -- # net_devs=() 00:16:50.901 21:31:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:50.901 21:31:16 -- nvmf/common.sh@296 -- # e810=() 00:16:50.901 21:31:16 -- nvmf/common.sh@296 -- # local -ga e810 00:16:50.901 21:31:16 -- nvmf/common.sh@297 -- # x722=() 00:16:50.901 21:31:16 -- nvmf/common.sh@297 -- # local -ga x722 00:16:50.901 21:31:16 -- nvmf/common.sh@298 -- # mlx=() 00:16:50.901 21:31:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:50.901 21:31:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.901 21:31:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.901 21:31:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.901 21:31:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.901 21:31:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.901 21:31:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.901 21:31:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.901 21:31:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.901 21:31:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.901 21:31:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.901 21:31:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.901 21:31:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:50.901 21:31:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:50.901 21:31:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:50.901 21:31:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.901 21:31:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:50.901 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:50.901 21:31:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.901 21:31:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:50.901 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:50.901 21:31:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:16:50.901 21:31:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:50.901 21:31:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:50.901 21:31:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.901 21:31:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.901 21:31:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:50.901 21:31:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.901 21:31:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:50.901 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:50.901 21:31:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.901 21:31:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.901 21:31:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.901 21:31:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:50.901 21:31:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.901 21:31:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:50.901 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:50.902 21:31:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.902 21:31:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:50.902 21:31:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:50.902 21:31:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:50.902 21:31:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:50.902 21:31:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:50.902 21:31:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.902 21:31:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.902 21:31:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.902 21:31:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:50.902 21:31:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.902 21:31:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.902 21:31:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:50.902 21:31:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.902 21:31:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.902 21:31:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:50.902 21:31:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:50.902 21:31:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.902 21:31:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.902 21:31:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.902 21:31:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.902 21:31:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:50.902 21:31:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:50.902 21:31:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.902 21:31:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:16:50.902 21:31:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:50.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:16:50.902 00:16:50.902 --- 10.0.0.2 ping statistics --- 00:16:50.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.902 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:16:50.902 21:31:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:16:50.902 00:16:50.902 --- 10.0.0.1 ping statistics --- 00:16:50.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.902 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:16:50.902 21:31:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.902 21:31:16 -- nvmf/common.sh@411 -- # return 0 00:16:50.902 21:31:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:50.902 21:31:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.902 21:31:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:50.902 21:31:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:50.902 21:31:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.902 21:31:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:50.902 21:31:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:50.902 21:31:16 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:50.902 21:31:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:50.902 21:31:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:50.902 21:31:16 -- common/autotest_common.sh@10 -- # set +x 00:16:50.902 21:31:16 -- nvmf/common.sh@470 -- # nvmfpid=2629612 00:16:50.902 21:31:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:50.902 21:31:16 -- nvmf/common.sh@471 -- # waitforlisten 2629612 00:16:50.902 21:31:16 -- common/autotest_common.sh@817 -- # '[' -z 2629612 ']' 00:16:50.902 21:31:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.902 21:31:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:50.902 21:31:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.902 21:31:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:50.902 21:31:16 -- common/autotest_common.sh@10 -- # set +x 00:16:51.160 [2024-04-24 21:31:16.605468] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:16:51.160 [2024-04-24 21:31:16.605539] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.160 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.160 [2024-04-24 21:31:16.672009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:51.160 [2024-04-24 21:31:16.775161] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
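Annotation: nvmf_tcp_init above splits the two detected E810 ports into a point-to-point test topology: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the bidirectional pings confirm the link. A minimal standalone sketch of the same setup, assuming two mutually reachable ports with these names (all commands are taken from the trace itself, not an official helper):

    # sketch: rebuild the nvmf_tcp_init topology by hand
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator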
00:16:51.160 [2024-04-24 21:31:16.775220] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.160 [2024-04-24 21:31:16.775235] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.160 [2024-04-24 21:31:16.775247] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.160 [2024-04-24 21:31:16.775258] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.160 [2024-04-24 21:31:16.775324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.160 [2024-04-24 21:31:16.775385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.160 [2024-04-24 21:31:16.775388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.419 21:31:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:51.419 21:31:16 -- common/autotest_common.sh@850 -- # return 0 00:16:51.419 21:31:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:51.419 21:31:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:51.419 21:31:16 -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 21:31:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.419 21:31:16 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.419 21:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.419 21:31:16 -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 [2024-04-24 21:31:16.913527] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.419 21:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.419 21:31:16 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:51.419 21:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.419 21:31:16 -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 Malloc0 00:16:51.419 21:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.419 21:31:16 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:51.419 21:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.419 21:31:16 -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 21:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.419 21:31:16 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:51.419 21:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.419 21:31:16 -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 21:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.419 21:31:16 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.419 21:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.419 21:31:16 -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 [2024-04-24 21:31:16.976241] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.419 21:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.419 21:31:16 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:51.419 21:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.419 21:31:16 
-- common/autotest_common.sh@10 -- # set +x 00:16:51.419 [2024-04-24 21:31:16.984137] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:51.419 21:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.419 21:31:16 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:51.419 21:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.419 21:31:16 -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 Malloc1 00:16:51.419 21:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.419 21:31:17 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:51.419 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.419 21:31:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 21:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.419 21:31:17 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:51.419 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.419 21:31:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 21:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.419 21:31:17 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:51.419 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.419 21:31:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 21:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.419 21:31:17 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:51.419 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.419 21:31:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 21:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.419 21:31:17 -- host/multicontroller.sh@44 -- # bdevperf_pid=2629639 00:16:51.419 21:31:17 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:51.419 21:31:17 -- host/multicontroller.sh@47 -- # waitforlisten 2629639 /var/tmp/bdevperf.sock 00:16:51.419 21:31:17 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:51.419 21:31:17 -- common/autotest_common.sh@817 -- # '[' -z 2629639 ']' 00:16:51.419 21:31:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.419 21:31:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:51.419 21:31:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
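Annotation: at this point the multicontroller test has built its target configuration (one malloc bdev per subsystem, two subsystems cnode1/cnode2, and listeners on ports 4420 and 4421 for each) and launched bdevperf with -z so it idles on its own RPC socket. A sketch of the equivalent manual configuration, assuming the harness's rpc_cmd wrapper resolves to SPDK's scripts/rpc.py against the target's RPC socket:

    # sketch: same target config via rpc.py (method names and flags copied from the trace)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 is configured identically with Malloc1 and serial SPDK00000000000002,
    # so both subsystems expose listeners on 4420 and 4421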
00:16:51.419 21:31:17 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:51.419 21:31:17 -- common/autotest_common.sh@10 -- # set +x
00:16:51.986 21:31:17 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:51.986 21:31:17 -- common/autotest_common.sh@850 -- # return 0
00:16:51.986 21:31:17 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:16:51.986 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:51.986 21:31:17 -- common/autotest_common.sh@10 -- # set +x
00:16:51.986 NVMe0n1
00:16:51.986 21:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:51.986 21:31:17 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:51.986 21:31:17 -- host/multicontroller.sh@54 -- # grep -c NVMe
00:16:51.986 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:51.986 21:31:17 -- common/autotest_common.sh@10 -- # set +x
00:16:51.986 21:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:51.986 1
00:16:51.986 21:31:17 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:16:51.986 21:31:17 -- common/autotest_common.sh@638 -- # local es=0
00:16:51.986 21:31:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:16:51.986 21:31:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd
00:16:51.986 21:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:16:51.986 21:31:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:16:51.986 21:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:16:51.986 21:31:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:16:51.986 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:51.986 21:31:17 -- common/autotest_common.sh@10 -- # set +x
00:16:51.986 request:
00:16:51.986 {
00:16:51.986 "name": "NVMe0",
00:16:51.986 "trtype": "tcp",
00:16:51.986 "traddr": "10.0.0.2",
00:16:51.986 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:16:51.986 "hostaddr": "10.0.0.2",
00:16:51.986 "hostsvcid": "60000",
00:16:51.986 "adrfam": "ipv4",
00:16:51.986 "trsvcid": "4420",
00:16:51.986 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:51.986 "method": "bdev_nvme_attach_controller",
00:16:51.986 "req_id": 1
00:16:51.986 }
00:16:51.986 Got JSON-RPC error response
00:16:51.986 response:
00:16:51.986 {
00:16:51.986 "code": -114,
00:16:51.986 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:16:51.986 }
00:16:51.986 21:31:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:16:51.986 21:31:17 -- common/autotest_common.sh@641 -- # es=1
00:16:51.986 21:31:17 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:16:51.986 21:31:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:16:51.986 21:31:17 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:16:51.986 21:31:17 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:16:51.986 21:31:17 -- common/autotest_common.sh@638 -- # local es=0
00:16:51.986 21:31:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:16:51.986 21:31:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd
00:16:51.986 21:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:16:51.986 21:31:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:16:51.986 21:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:16:51.986 21:31:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:16:51.986 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:51.986 21:31:17 -- common/autotest_common.sh@10 -- # set +x
00:16:51.986 request:
00:16:51.986 {
00:16:51.986 "name": "NVMe0",
00:16:51.986 "trtype": "tcp",
00:16:51.986 "traddr": "10.0.0.2",
00:16:51.986 "hostaddr": "10.0.0.2",
00:16:51.986 "hostsvcid": "60000",
00:16:51.986 "adrfam": "ipv4",
00:16:51.986 "trsvcid": "4420",
00:16:51.986 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:16:51.986 "method": "bdev_nvme_attach_controller",
00:16:51.986 "req_id": 1
00:16:51.986 }
00:16:51.986 Got JSON-RPC error response
00:16:51.986 response:
00:16:51.986 {
00:16:51.986 "code": -114,
00:16:51.986 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:16:51.986 }
00:16:51.986 21:31:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:16:51.987 21:31:17 -- common/autotest_common.sh@641 -- # es=1
00:16:51.987 21:31:17 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:16:51.987 21:31:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:16:51.987 21:31:17 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:16:51.987 21:31:17 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:16:51.987 21:31:17 -- common/autotest_common.sh@638 -- # local es=0
00:16:51.987 21:31:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:16:51.987 21:31:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd
00:16:51.987 21:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:16:51.987 21:31:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:16:51.987 21:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:16:51.987 21:31:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:16:51.987 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:51.987 21:31:17 -- common/autotest_common.sh@10 -- # set +x
00:16:51.987 request:
00:16:51.987 {
00:16:51.987 "name": "NVMe0",
00:16:51.987 "trtype": "tcp",
00:16:51.987 "traddr": "10.0.0.2",
00:16:51.987 "hostaddr": "10.0.0.2",
00:16:51.987 "hostsvcid": "60000",
00:16:51.987 "adrfam": "ipv4",
00:16:51.987 "trsvcid": "4420",
00:16:51.987 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:51.987 "multipath": "disable",
00:16:51.987 "method": "bdev_nvme_attach_controller",
00:16:51.987 "req_id": 1
00:16:51.987 }
00:16:51.987 Got JSON-RPC error response
00:16:51.987 response:
00:16:51.987 {
00:16:51.987 "code": -114,
00:16:51.987 "message": "A controller named NVMe0 already exists and multipath is disabled\n"
00:16:51.987 }
00:16:51.987 21:31:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:16:51.987 21:31:17 -- common/autotest_common.sh@641 -- # es=1
00:16:51.987 21:31:17 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:16:51.987 21:31:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:16:51.987 21:31:17 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:16:51.987 21:31:17 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:16:51.987 21:31:17 -- common/autotest_common.sh@638 -- # local es=0
00:16:51.987 21:31:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:16:51.987 21:31:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd
00:16:51.987 21:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:16:51.987 21:31:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:16:51.987 21:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:16:51.987 21:31:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:16:51.987 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:51.987 21:31:17 -- common/autotest_common.sh@10 -- # set +x
00:16:51.987 request:
00:16:51.987 {
00:16:51.987 "name": "NVMe0",
00:16:51.987 "trtype": "tcp",
00:16:51.987 "traddr": "10.0.0.2",
00:16:51.987 "hostaddr": "10.0.0.2",
00:16:51.987 "hostsvcid": "60000",
00:16:51.987 "adrfam": "ipv4",
00:16:51.987 "trsvcid": "4420",
00:16:51.987 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:51.987 "multipath": "failover",
00:16:51.987 "method": "bdev_nvme_attach_controller",
00:16:51.987 "req_id": 1
00:16:51.987 }
00:16:51.987 Got JSON-RPC error response
00:16:51.987 response:
00:16:51.987 {
00:16:51.987 "code": -114,
00:16:51.987 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:16:51.987 }
00:16:51.987 21:31:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:16:51.987 21:31:17 -- common/autotest_common.sh@641 -- # es=1
00:16:51.987 21:31:17 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:16:51.987 21:31:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:16:51.987 21:31:17 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:16:51.987 21:31:17 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:51.987 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:51.987 21:31:17 -- common/autotest_common.sh@10 -- # set +x
00:16:52.244
00:16:52.244 21:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:52.244 21:31:17 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:52.244 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:52.244 21:31:17 -- common/autotest_common.sh@10 -- # set +x
00:16:52.244 21:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:52.244 21:31:17 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:16:52.244 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:52.244 21:31:17 -- common/autotest_common.sh@10 -- # set +x
00:16:52.502
00:16:52.502 21:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:52.502 21:31:17 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:52.502 21:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:52.502 21:31:17 -- host/multicontroller.sh@90 -- # grep -c NVMe
00:16:52.502 21:31:17 -- common/autotest_common.sh@10 -- # set +x
00:16:52.502 21:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:52.502 21:31:17 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:16:52.502 21:31:17 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:53.435 0
00:16:53.435 21:31:19 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:16:53.435 21:31:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:53.435 21:31:19 -- common/autotest_common.sh@10 -- # set +x
00:16:53.435 21:31:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:53.435 21:31:19 -- host/multicontroller.sh@100 -- # killprocess 2629639
00:16:53.435 21:31:19 -- common/autotest_common.sh@936 -- # '[' -z 2629639 ']'
00:16:53.435 21:31:19 -- common/autotest_common.sh@940 -- # kill -0 2629639
00:16:53.435 21:31:19 -- common/autotest_common.sh@941 -- # uname
00:16:53.435 21:31:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:53.435 21:31:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2629639
00:16:53.693 21:31:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:53.693 21:31:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:53.693 21:31:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2629639'
00:16:53.693 killing process with pid 2629639
00:16:53.693 21:31:19 -- common/autotest_common.sh@955 -- # kill 2629639
00:16:53.693 21:31:19 -- common/autotest_common.sh@960 -- # wait 2629639
00:16:53.951 21:31:19 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:53.951 21:31:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:53.951 21:31:19 -- common/autotest_common.sh@10 -- # set +x
00:16:53.951 21:31:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:53.951 21:31:19 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:16:53.951 21:31:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:53.951 21:31:19 -- common/autotest_common.sh@10 -- # set +x
00:16:53.951 21:31:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:53.951 21:31:19 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT
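Annotation: the request/response pairs above pin down bdev_nvme_attach_controller's duplicate-name rules: reusing the controller name NVMe0 with a different hostnqn, a different subsystem (cnode2), multipath disabled, or a failover attach to the identical 4420 path all fail with JSON-RPC error -114, while a plain re-attach of NVMe0 to the second listener (4421) is accepted as an additional path, after which attaching NVMe1 on 4421 brings the controller count to 2. A condensed sketch of that sequence, with flags copied from the trace:

    # first attach creates bdev NVMe0n1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.2 -c 60000
    # same name with a conflicting identity (-q hostnqn, cnode2, or
    # -x disable/failover against the same 4420 path) -> error -114, as logged above
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # -> accepted: same controller name, new network path (port 4421)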
00:16:53.951 21:31:19 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:16:53.951 21:31:19 -- common/autotest_common.sh@1598 -- # read -r file
00:16:53.951 21:31:19 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:16:53.951 21:31:19 -- common/autotest_common.sh@1597 -- # sort -u
00:16:53.951 21:31:19 -- common/autotest_common.sh@1599 -- # cat
00:16:53.951 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:16:53.951 [2024-04-24 21:31:17.088046] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:16:53.951 [2024-04-24 21:31:17.088127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2629639 ]
00:16:53.951 EAL: No free 2048 kB hugepages reported on node 1
00:16:53.951 [2024-04-24 21:31:17.147653] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:53.951 [2024-04-24 21:31:17.256064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:53.951 [2024-04-24 21:31:17.936220] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 4ea66609-4e70-477a-a2ab-f51f45c1613b already exists
00:16:53.951 [2024-04-24 21:31:17.936258] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:4ea66609-4e70-477a-a2ab-f51f45c1613b alias for bdev NVMe1n1
00:16:53.951 [2024-04-24 21:31:17.936275] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:16:53.951 Running I/O for 1 seconds...
00:16:53.951
00:16:53.951 Latency(us)
00:16:53.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:53.951 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:16:53.951 NVMe0n1 : 1.00 18483.16 72.20 0.00 0.00 6906.18 4393.34 14563.56
00:16:53.951 ===================================================================================================================
00:16:53.951 Total : 18483.16 72.20 0.00 0.00 6906.18 4393.34 14563.56
00:16:53.951 Received shutdown signal, test time was about 1.000000 seconds
00:16:53.951
00:16:53.951 Latency(us)
00:16:53.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:53.951 ===================================================================================================================
00:16:53.951 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:53.951 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:16:53.951 21:31:19 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:16:53.951 21:31:19 -- common/autotest_common.sh@1598 -- # read -r file
00:16:53.951 21:31:19 -- host/multicontroller.sh@108 -- # nvmftestfini
00:16:53.951 21:31:19 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:53.951 21:31:19 -- nvmf/common.sh@117 -- # sync
00:16:53.951 21:31:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:53.951 21:31:19 -- nvmf/common.sh@120 -- # set +e
00:16:53.951 21:31:19 -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:53.951 21:31:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:53.951 rmmod nvme_tcp
00:16:53.951 rmmod nvme_fabrics
00:16:53.951 rmmod nvme_keyring
00:16:53.951 21:31:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:53.951 21:31:19 -- nvmf/common.sh@124 -- # set -e
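Annotation: the try.txt dump above is bdevperf's own captured log. Because bdevperf was started with -z it sat idle on /var/tmp/bdevperf.sock until the harness sent perform_tests; the two *ERROR* lines appear to be expected here, since NVMe1 attaches to a namespace whose UUID is already registered under NVMe0n1, so registering the NVMe1n1 alias is refused (that reading is an inference from the messages, not spelled out by the log). The drive sequence, reduced to the two relevant commands from the trace (workspace paths shortened):

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # wait for /var/tmp/bdevperf.sock to appear, then kick off the workload:
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests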
00:16:53.951 21:31:19 -- nvmf/common.sh@125 -- # return 0 00:16:53.951 21:31:19 -- nvmf/common.sh@478 -- # '[' -n 2629612 ']' 00:16:53.951 21:31:19 -- nvmf/common.sh@479 -- # killprocess 2629612 00:16:53.951 21:31:19 -- common/autotest_common.sh@936 -- # '[' -z 2629612 ']' 00:16:53.951 21:31:19 -- common/autotest_common.sh@940 -- # kill -0 2629612 00:16:53.951 21:31:19 -- common/autotest_common.sh@941 -- # uname 00:16:53.951 21:31:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.951 21:31:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2629612 00:16:53.951 21:31:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:53.951 21:31:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:53.951 21:31:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2629612' 00:16:53.951 killing process with pid 2629612 00:16:53.951 21:31:19 -- common/autotest_common.sh@955 -- # kill 2629612 00:16:53.951 21:31:19 -- common/autotest_common.sh@960 -- # wait 2629612 00:16:54.209 21:31:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:54.209 21:31:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:54.209 21:31:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:54.209 21:31:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.209 21:31:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.209 21:31:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.209 21:31:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.209 21:31:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.740 21:31:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.741 00:16:56.741 real 0m7.447s 00:16:56.741 user 0m11.998s 00:16:56.741 sys 0m2.228s 00:16:56.741 21:31:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:56.741 21:31:21 -- common/autotest_common.sh@10 -- # set +x 00:16:56.741 ************************************ 00:16:56.741 END TEST nvmf_multicontroller 00:16:56.741 ************************************ 00:16:56.741 21:31:21 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:56.741 21:31:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.741 21:31:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.741 21:31:21 -- common/autotest_common.sh@10 -- # set +x 00:16:56.741 ************************************ 00:16:56.741 START TEST nvmf_aer 00:16:56.741 ************************************ 00:16:56.741 21:31:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:56.741 * Looking for test storage... 
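Annotation: the nvmftestfini sequence that finishes above unwinds everything the multicontroller test set up: the kernel initiator modules are removed, the nvmf_tgt process is killed and reaped, and the namespace plumbing is flushed before the next test (nvmf_aer) starts. Approximately, as a sketch; the netns deletion is an assumption about what _remove_spdk_ns does:

    modprobe -v -r nvme-tcp              # also drops nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # nvmf_tgt, pid 2629612 in this run
    ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1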
00:16:56.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:56.741 21:31:22 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.741 21:31:22 -- nvmf/common.sh@7 -- # uname -s 00:16:56.741 21:31:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.741 21:31:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.741 21:31:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.741 21:31:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.741 21:31:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.741 21:31:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.741 21:31:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.741 21:31:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.741 21:31:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.741 21:31:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.741 21:31:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.741 21:31:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.741 21:31:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.741 21:31:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.741 21:31:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.741 21:31:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.741 21:31:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.741 21:31:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.741 21:31:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.741 21:31:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.741 21:31:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.741 21:31:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.741 21:31:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.741 21:31:22 -- paths/export.sh@5 -- # export PATH 00:16:56.741 21:31:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.741 21:31:22 -- nvmf/common.sh@47 -- # : 0 00:16:56.741 21:31:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.741 21:31:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.741 21:31:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.741 21:31:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.741 21:31:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.741 21:31:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.741 21:31:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.741 21:31:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.741 21:31:22 -- host/aer.sh@11 -- # nvmftestinit 00:16:56.741 21:31:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:56.741 21:31:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.741 21:31:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:56.741 21:31:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:56.741 21:31:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:56.741 21:31:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.741 21:31:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.741 21:31:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.741 21:31:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:56.741 21:31:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:56.741 21:31:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.741 21:31:22 -- common/autotest_common.sh@10 -- # set +x 00:16:58.673 21:31:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:58.673 21:31:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.673 21:31:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.673 21:31:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.673 21:31:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.673 21:31:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.673 21:31:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.673 21:31:24 -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.673 21:31:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.673 21:31:24 -- nvmf/common.sh@296 -- # e810=() 00:16:58.673 21:31:24 -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.673 21:31:24 -- nvmf/common.sh@297 -- # x722=() 00:16:58.673 
21:31:24 -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.673 21:31:24 -- nvmf/common.sh@298 -- # mlx=() 00:16:58.673 21:31:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.673 21:31:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.673 21:31:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.673 21:31:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.673 21:31:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.673 21:31:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.673 21:31:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.673 21:31:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.673 21:31:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.673 21:31:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.673 21:31:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.673 21:31:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.673 21:31:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.673 21:31:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.673 21:31:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.673 21:31:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.673 21:31:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:58.673 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:58.673 21:31:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.673 21:31:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:58.673 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:58.673 21:31:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.673 21:31:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.673 21:31:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.673 21:31:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:58.673 21:31:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.673 21:31:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:58.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:58.673 21:31:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.673 21:31:24 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.673 21:31:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.673 21:31:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:58.673 21:31:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.673 21:31:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:58.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:58.673 21:31:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.673 21:31:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:58.673 21:31:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:58.673 21:31:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:58.673 21:31:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.673 21:31:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.673 21:31:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.673 21:31:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.673 21:31:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.673 21:31:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.673 21:31:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.673 21:31:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.673 21:31:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.673 21:31:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.673 21:31:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.673 21:31:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.673 21:31:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.673 21:31:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.673 21:31:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.673 21:31:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.673 21:31:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.673 21:31:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.673 21:31:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.673 21:31:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:16:58.673 00:16:58.673 --- 10.0.0.2 ping statistics --- 00:16:58.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.673 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:16:58.673 21:31:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:58.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:16:58.673 00:16:58.673 --- 10.0.0.1 ping statistics --- 00:16:58.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.673 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:16:58.673 21:31:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.673 21:31:24 -- nvmf/common.sh@411 -- # return 0 00:16:58.673 21:31:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:58.673 21:31:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.673 21:31:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:58.673 21:31:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.673 21:31:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:58.673 21:31:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:58.673 21:31:24 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:58.673 21:31:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:58.673 21:31:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:58.673 21:31:24 -- common/autotest_common.sh@10 -- # set +x 00:16:58.674 21:31:24 -- nvmf/common.sh@470 -- # nvmfpid=2631970 00:16:58.674 21:31:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:58.674 21:31:24 -- nvmf/common.sh@471 -- # waitforlisten 2631970 00:16:58.674 21:31:24 -- common/autotest_common.sh@817 -- # '[' -z 2631970 ']' 00:16:58.674 21:31:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.674 21:31:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:58.674 21:31:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.674 21:31:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:58.674 21:31:24 -- common/autotest_common.sh@10 -- # set +x 00:16:58.674 [2024-04-24 21:31:24.250019] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:16:58.674 [2024-04-24 21:31:24.250103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.674 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.674 [2024-04-24 21:31:24.319819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.932 [2024-04-24 21:31:24.436552] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.932 [2024-04-24 21:31:24.436603] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.932 [2024-04-24 21:31:24.436644] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.932 [2024-04-24 21:31:24.436659] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.932 [2024-04-24 21:31:24.436670] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
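Annotation: for the aer test the target is restarted inside the same target namespace, this time on all four cores. The invocation from the trace, with the long workspace path shortened:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # -m 0xF   -> reactors on cores 0-3 (matches the four reactor notices below)
    # -e 0xFFFF -> tracepoint group mask, per the app_setup_trace notices
    # -i 0     -> shared-memory instance id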
00:16:58.932 [2024-04-24 21:31:24.436745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:58.932 [2024-04-24 21:31:24.436780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:58.932 [2024-04-24 21:31:24.436836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:58.932 [2024-04-24 21:31:24.436839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:59.866 21:31:25 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:59.866 21:31:25 -- common/autotest_common.sh@850 -- # return 0
00:16:59.866 21:31:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:16:59.866 21:31:25 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:59.866 21:31:25 -- common/autotest_common.sh@10 -- # set +x
00:16:59.866 21:31:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:59.866 21:31:25 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:59.866 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:59.866 21:31:25 -- common/autotest_common.sh@10 -- # set +x
00:16:59.866 [2024-04-24 21:31:25.215391] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:59.866 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:59.866 21:31:25 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:16:59.866 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:59.866 21:31:25 -- common/autotest_common.sh@10 -- # set +x
00:16:59.866 Malloc0
00:16:59.866 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:59.866 21:31:25 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:16:59.866 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:59.866 21:31:25 -- common/autotest_common.sh@10 -- # set +x
00:16:59.866 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:59.866 21:31:25 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:59.866 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:59.866 21:31:25 -- common/autotest_common.sh@10 -- # set +x
00:16:59.866 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:59.866 21:31:25 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:59.866 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:59.866 21:31:25 -- common/autotest_common.sh@10 -- # set +x
00:16:59.866 [2024-04-24 21:31:25.269001] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:59.866 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:59.866 21:31:25 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:16:59.866 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:59.866 21:31:25 -- common/autotest_common.sh@10 -- # set +x
00:16:59.866 [2024-04-24 21:31:25.276709] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:16:59.866 [
00:16:59.866 {
00:16:59.866 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:16:59.866 "subtype": "Discovery",
00:16:59.866 "listen_addresses": [],
00:16:59.866 "allow_any_host": true,
00:16:59.866 "hosts": []
00:16:59.866 },
00:16:59.866 {
00:16:59.866 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:59.866 "subtype": "NVMe",
00:16:59.866 "listen_addresses": [
00:16:59.866 {
00:16:59.866 "transport": "TCP",
00:16:59.866 "trtype": "TCP",
00:16:59.866 "adrfam": "IPv4",
00:16:59.866 "traddr": "10.0.0.2",
00:16:59.866 "trsvcid": "4420"
00:16:59.866 }
00:16:59.866 ],
00:16:59.866 "allow_any_host": true,
00:16:59.866 "hosts": [],
00:16:59.866 "serial_number": "SPDK00000000000001",
00:16:59.866 "model_number": "SPDK bdev Controller",
00:16:59.866 "max_namespaces": 2,
00:16:59.866 "min_cntlid": 1,
00:16:59.866 "max_cntlid": 65519,
00:16:59.866 "namespaces": [
00:16:59.866 {
00:16:59.866 "nsid": 1,
00:16:59.866 "bdev_name": "Malloc0",
00:16:59.866 "name": "Malloc0",
00:16:59.866 "nguid": "016A74C0CF574156A4E34D9BEE73AACF",
00:16:59.866 "uuid": "016a74c0-cf57-4156-a4e3-4d9bee73aacf"
00:16:59.866 }
00:16:59.866 ]
00:16:59.866 }
00:16:59.866 ]
00:16:59.866 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:59.866 21:31:25 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:16:59.866 21:31:25 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:16:59.866 21:31:25 -- host/aer.sh@33 -- # aerpid=2632128
00:16:59.866 21:31:25 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:16:59.866 21:31:25 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:16:59.866 21:31:25 -- common/autotest_common.sh@1251 -- # local i=0
00:16:59.866 21:31:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:16:59.866 21:31:25 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']'
00:16:59.866 21:31:25 -- common/autotest_common.sh@1254 -- # i=1
00:16:59.866 21:31:25 -- common/autotest_common.sh@1255 -- # sleep 0.1
00:16:59.866 EAL: No free 2048 kB hugepages reported on node 1
00:16:59.866 21:31:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:16:59.866 21:31:25 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']'
00:16:59.866 21:31:25 -- common/autotest_common.sh@1254 -- # i=2
00:16:59.866 21:31:25 -- common/autotest_common.sh@1255 -- # sleep 0.1
00:16:59.866 21:31:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:16:59.866 21:31:25 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:16:59.866 21:31:25 -- common/autotest_common.sh@1262 -- # return 0
00:16:59.866 21:31:25 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:16:59.866 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:59.866 21:31:25 -- common/autotest_common.sh@10 -- # set +x
00:16:59.866 Malloc1
00:16:59.866 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:59.866 21:31:25 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:16:59.866 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:59.866 21:31:25 -- common/autotest_common.sh@10 -- # set +x
00:17:00.125 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:00.125 21:31:25 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:17:00.125 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:00.125 21:31:25 -- common/autotest_common.sh@10 -- # set +x
00:17:00.125 [
00:17:00.125 {
00:17:00.125 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:17:00.125 "subtype": "Discovery",
00:17:00.125 "listen_addresses": [],
00:17:00.125 "allow_any_host": true,
00:17:00.125 "hosts": []
00:17:00.125 },
00:17:00.125 {
00:17:00.125 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:00.125 "subtype": "NVMe",
00:17:00.125 "listen_addresses": [
00:17:00.125 {
00:17:00.125 "transport": "TCP",
00:17:00.125 "trtype": "TCP",
00:17:00.125 "adrfam": "IPv4",
00:17:00.125 "traddr": "10.0.0.2",
00:17:00.125 "trsvcid": "4420"
00:17:00.125 }
00:17:00.125 ],
00:17:00.125 "allow_any_host": true,
00:17:00.125 "hosts": [],
00:17:00.125 "serial_number": "SPDK00000000000001",
00:17:00.125 "model_number": "SPDK bdev Controller",
00:17:00.125 "max_namespaces": 2,
00:17:00.125 "min_cntlid": 1,
00:17:00.125 "max_cntlid": 65519,
00:17:00.125 "namespaces": [
00:17:00.125 {
00:17:00.125 "nsid": 1,
00:17:00.125 "bdev_name": "Malloc0",
00:17:00.125 "name": "Malloc0",
00:17:00.125 "nguid": "016A74C0CF574156A4E34D9BEE73AACF",
00:17:00.125 "uuid": "016a74c0-cf57-4156-a4e3-4d9bee73aacf"
00:17:00.125 },
00:17:00.125 {
00:17:00.125 "nsid": 2,
00:17:00.125 "bdev_name": "Malloc1",
00:17:00.125 "name": "Malloc1",
00:17:00.125 "nguid": "8F22CB6571824B5194FE32E35A7E371F",
00:17:00.125 "uuid": "8f22cb65-7182-4b51-94fe-32e35a7e371f"
00:17:00.125 }
00:17:00.125 ]
00:17:00.125 }
00:17:00.125 ]
00:17:00.125 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:00.125 21:31:25 -- host/aer.sh@43 -- # wait 2632128
00:17:00.125 Asynchronous Event Request test
00:17:00.125 Attaching to 10.0.0.2
00:17:00.125 Attached to 10.0.0.2
00:17:00.125 Registering asynchronous event callbacks...
00:17:00.125 Starting namespace attribute notice tests for all controllers...
00:17:00.125 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:17:00.125 aer_cb - Changed Namespace
00:17:00.125 Cleaning up...
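Annotation: the AER round-trip above works in three steps: the aer helper connects to cnode1, registers its asynchronous-event callback, and touches /tmp/aer_touch_file; the harness polls for that file (the waitforfile loop, capped at 200 iterations), then hot-adds a second namespace, which makes the target emit a Namespace Attribute Changed notice (log page 4, the Changed Namespace List). In order, as a sketch with paths shortened from the trace:

    rm -f /tmp/aer_touch_file
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # simplified waitforfile
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"   # aer exits after logging "aer_cb - Changed Namespace"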
00:17:00.125 21:31:25 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:00.125 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.125 21:31:25 -- common/autotest_common.sh@10 -- # set +x 00:17:00.125 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.125 21:31:25 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:00.125 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.125 21:31:25 -- common/autotest_common.sh@10 -- # set +x 00:17:00.125 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.125 21:31:25 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.125 21:31:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.125 21:31:25 -- common/autotest_common.sh@10 -- # set +x 00:17:00.125 21:31:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.125 21:31:25 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:00.125 21:31:25 -- host/aer.sh@51 -- # nvmftestfini 00:17:00.125 21:31:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:00.125 21:31:25 -- nvmf/common.sh@117 -- # sync 00:17:00.125 21:31:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.125 21:31:25 -- nvmf/common.sh@120 -- # set +e 00:17:00.125 21:31:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.125 21:31:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.125 rmmod nvme_tcp 00:17:00.125 rmmod nvme_fabrics 00:17:00.125 rmmod nvme_keyring 00:17:00.125 21:31:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.125 21:31:25 -- nvmf/common.sh@124 -- # set -e 00:17:00.125 21:31:25 -- nvmf/common.sh@125 -- # return 0 00:17:00.125 21:31:25 -- nvmf/common.sh@478 -- # '[' -n 2631970 ']' 00:17:00.125 21:31:25 -- nvmf/common.sh@479 -- # killprocess 2631970 00:17:00.125 21:31:25 -- common/autotest_common.sh@936 -- # '[' -z 2631970 ']' 00:17:00.125 21:31:25 -- common/autotest_common.sh@940 -- # kill -0 2631970 00:17:00.125 21:31:25 -- common/autotest_common.sh@941 -- # uname 00:17:00.125 21:31:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.125 21:31:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2631970 00:17:00.125 21:31:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:00.125 21:31:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:00.125 21:31:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2631970' 00:17:00.125 killing process with pid 2631970 00:17:00.125 21:31:25 -- common/autotest_common.sh@955 -- # kill 2631970 00:17:00.125 [2024-04-24 21:31:25.730782] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:00.125 21:31:25 -- common/autotest_common.sh@960 -- # wait 2631970 00:17:00.384 21:31:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:00.384 21:31:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:00.384 21:31:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:00.384 21:31:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.384 21:31:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:00.384 21:31:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.384 21:31:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.384 21:31:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.926 21:31:28 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:02.926 00:17:02.926 real 0m6.048s 00:17:02.926 user 0m6.986s 00:17:02.926 sys 0m1.936s 00:17:02.926 21:31:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:02.926 21:31:28 -- common/autotest_common.sh@10 -- # set +x 00:17:02.926 ************************************ 00:17:02.926 END TEST nvmf_aer 00:17:02.926 ************************************ 00:17:02.926 21:31:28 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:02.926 21:31:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:02.926 21:31:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:02.926 21:31:28 -- common/autotest_common.sh@10 -- # set +x 00:17:02.926 ************************************ 00:17:02.926 START TEST nvmf_async_init 00:17:02.926 ************************************ 00:17:02.926 21:31:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:02.926 * Looking for test storage... 00:17:02.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:02.926 21:31:28 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.926 21:31:28 -- nvmf/common.sh@7 -- # uname -s 00:17:02.926 21:31:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.926 21:31:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.926 21:31:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.926 21:31:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.926 21:31:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.926 21:31:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.926 21:31:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.926 21:31:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.926 21:31:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.926 21:31:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.926 21:31:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.926 21:31:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.926 21:31:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.926 21:31:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.926 21:31:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.926 21:31:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.926 21:31:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.926 21:31:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.926 21:31:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.926 21:31:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.926 21:31:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.926 21:31:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.926 21:31:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.926 21:31:28 -- paths/export.sh@5 -- # export PATH 00:17:02.926 21:31:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.926 21:31:28 -- nvmf/common.sh@47 -- # : 0 00:17:02.926 21:31:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.926 21:31:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.926 21:31:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.926 21:31:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.926 21:31:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.926 21:31:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.926 21:31:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.926 21:31:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.926 21:31:28 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:02.927 21:31:28 -- host/async_init.sh@14 -- # null_block_size=512 00:17:02.927 21:31:28 -- host/async_init.sh@15 -- # null_bdev=null0 00:17:02.927 21:31:28 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:02.927 21:31:28 -- host/async_init.sh@20 -- # uuidgen 00:17:02.927 21:31:28 -- host/async_init.sh@20 -- # tr -d - 00:17:02.927 21:31:28 -- host/async_init.sh@20 -- # nguid=7f9d531a9f794d4d8ca8996e47bf236e 00:17:02.927 21:31:28 -- host/async_init.sh@22 -- # nvmftestinit 00:17:02.927 21:31:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 
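One detail from host/async_init.sh@20 above worth isolating: the namespace GUID that threads through the rest of this test is just a random UUID with its dashes stripped, e.g.:

    nguid=$(uuidgen | tr -d -)   # yields e.g. 7f9d531a9f794d4d8ca8996e47bf236e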
00:17:02.927 21:31:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.927 21:31:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:02.927 21:31:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:02.927 21:31:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:02.927 21:31:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.927 21:31:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.927 21:31:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.927 21:31:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:02.927 21:31:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:02.927 21:31:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:02.927 21:31:28 -- common/autotest_common.sh@10 -- # set +x 00:17:04.831 21:31:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:04.831 21:31:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:04.831 21:31:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:04.831 21:31:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:04.831 21:31:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:04.831 21:31:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:04.831 21:31:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:04.831 21:31:30 -- nvmf/common.sh@295 -- # net_devs=() 00:17:04.831 21:31:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:04.831 21:31:30 -- nvmf/common.sh@296 -- # e810=() 00:17:04.831 21:31:30 -- nvmf/common.sh@296 -- # local -ga e810 00:17:04.831 21:31:30 -- nvmf/common.sh@297 -- # x722=() 00:17:04.831 21:31:30 -- nvmf/common.sh@297 -- # local -ga x722 00:17:04.831 21:31:30 -- nvmf/common.sh@298 -- # mlx=() 00:17:04.831 21:31:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:04.831 21:31:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.831 21:31:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.831 21:31:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.831 21:31:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.831 21:31:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.831 21:31:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.831 21:31:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.831 21:31:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.831 21:31:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.831 21:31:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.831 21:31:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.831 21:31:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:04.831 21:31:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:04.831 21:31:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:04.831 21:31:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:04.831 21:31:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:04.831 21:31:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:04.831 21:31:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.831 21:31:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:04.831 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:04.831 21:31:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.831 21:31:30 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.831 21:31:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.831 21:31:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.831 21:31:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.831 21:31:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.831 21:31:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:04.831 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:04.831 21:31:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.831 21:31:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.831 21:31:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.831 21:31:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.832 21:31:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.832 21:31:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:04.832 21:31:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:04.832 21:31:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:04.832 21:31:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.832 21:31:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.832 21:31:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:04.832 21:31:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.832 21:31:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:04.832 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:04.832 21:31:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.832 21:31:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.832 21:31:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.832 21:31:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:04.832 21:31:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.832 21:31:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:04.832 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:04.832 21:31:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.832 21:31:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:04.832 21:31:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:04.832 21:31:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:04.832 21:31:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:04.832 21:31:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:04.832 21:31:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.832 21:31:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.832 21:31:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.832 21:31:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:04.832 21:31:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.832 21:31:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.832 21:31:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:04.832 21:31:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.832 21:31:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.832 21:31:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:04.832 21:31:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:04.832 21:31:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.832 21:31:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:17:04.832 21:31:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.832 21:31:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.832 21:31:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:04.832 21:31:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:04.832 21:31:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.832 21:31:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.832 21:31:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:04.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:17:04.832 00:17:04.832 --- 10.0.0.2 ping statistics --- 00:17:04.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.832 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:17:04.832 21:31:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:17:04.832 00:17:04.832 --- 10.0.0.1 ping statistics --- 00:17:04.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.832 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:17:04.832 21:31:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.832 21:31:30 -- nvmf/common.sh@411 -- # return 0 00:17:04.832 21:31:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:04.832 21:31:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.832 21:31:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:04.832 21:31:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:04.832 21:31:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.832 21:31:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:04.832 21:31:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:04.832 21:31:30 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:04.832 21:31:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:04.832 21:31:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:04.832 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:17:04.832 21:31:30 -- nvmf/common.sh@470 -- # nvmfpid=2634073 00:17:04.832 21:31:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:04.832 21:31:30 -- nvmf/common.sh@471 -- # waitforlisten 2634073 00:17:04.832 21:31:30 -- common/autotest_common.sh@817 -- # '[' -z 2634073 ']' 00:17:04.832 21:31:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.832 21:31:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:04.832 21:31:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.832 21:31:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:04.832 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:17:04.832 [2024-04-24 21:31:30.467202] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
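Stripped of the xtrace noise, the nvmf_tcp_init block that just ran (nvmf/common.sh@229-268) wires the two ice ports enumerated above into a point-to-point test network: the target-side port moves into a private network namespace so the 10.0.0.1 to 10.0.0.2 traffic crosses the physical link. A condensed sketch, assuming netdev names cvl_0_0/cvl_0_1 as in this log (each name comes from /sys/bus/pci/devices/<bdf>/net/):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator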
00:17:04.832 [2024-04-24 21:31:30.467293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.832 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.090 [2024-04-24 21:31:30.534289] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.090 [2024-04-24 21:31:30.642650] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.090 [2024-04-24 21:31:30.642719] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.090 [2024-04-24 21:31:30.642747] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.090 [2024-04-24 21:31:30.642759] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.090 [2024-04-24 21:31:30.642769] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.090 [2024-04-24 21:31:30.642803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.090 21:31:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:05.090 21:31:30 -- common/autotest_common.sh@850 -- # return 0 00:17:05.090 21:31:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:05.090 21:31:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:05.090 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:17:05.348 21:31:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.348 21:31:30 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:05.348 21:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.348 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:17:05.348 [2024-04-24 21:31:30.792552] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.348 21:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.348 21:31:30 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:05.348 21:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.348 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:17:05.348 null0 00:17:05.348 21:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.348 21:31:30 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:05.348 21:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.348 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:17:05.348 21:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.348 21:31:30 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:05.348 21:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.348 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:17:05.348 21:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.348 21:31:30 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7f9d531a9f794d4d8ca8996e47bf236e 00:17:05.348 21:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.348 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:17:05.348 21:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.348 21:31:30 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
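With networking up, the target-side bring-up for async_init reduces to a handful of RPCs (host/async_init.sh@26-31 above; rpc.py again stands in for the rpc_cmd wrapper, and $nguid is the stripped UUID generated earlier):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 1024 512               # 1024 MiB, 512-byte blocks (num_blocks 2097152 below)
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420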
00:17:05.348 21:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.348 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:17:05.348 [2024-04-24 21:31:30.832857] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.348 21:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.348 21:31:30 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:05.348 21:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.348 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:17:05.607 nvme0n1 00:17:05.607 21:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.607 21:31:31 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:05.607 21:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.607 21:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:05.607 [ 00:17:05.607 { 00:17:05.607 "name": "nvme0n1", 00:17:05.607 "aliases": [ 00:17:05.607 "7f9d531a-9f79-4d4d-8ca8-996e47bf236e" 00:17:05.607 ], 00:17:05.607 "product_name": "NVMe disk", 00:17:05.607 "block_size": 512, 00:17:05.607 "num_blocks": 2097152, 00:17:05.607 "uuid": "7f9d531a-9f79-4d4d-8ca8-996e47bf236e", 00:17:05.607 "assigned_rate_limits": { 00:17:05.607 "rw_ios_per_sec": 0, 00:17:05.607 "rw_mbytes_per_sec": 0, 00:17:05.607 "r_mbytes_per_sec": 0, 00:17:05.607 "w_mbytes_per_sec": 0 00:17:05.607 }, 00:17:05.607 "claimed": false, 00:17:05.607 "zoned": false, 00:17:05.607 "supported_io_types": { 00:17:05.607 "read": true, 00:17:05.607 "write": true, 00:17:05.607 "unmap": false, 00:17:05.607 "write_zeroes": true, 00:17:05.607 "flush": true, 00:17:05.607 "reset": true, 00:17:05.607 "compare": true, 00:17:05.607 "compare_and_write": true, 00:17:05.607 "abort": true, 00:17:05.607 "nvme_admin": true, 00:17:05.607 "nvme_io": true 00:17:05.607 }, 00:17:05.607 "memory_domains": [ 00:17:05.607 { 00:17:05.607 "dma_device_id": "system", 00:17:05.607 "dma_device_type": 1 00:17:05.607 } 00:17:05.607 ], 00:17:05.607 "driver_specific": { 00:17:05.607 "nvme": [ 00:17:05.607 { 00:17:05.607 "trid": { 00:17:05.607 "trtype": "TCP", 00:17:05.607 "adrfam": "IPv4", 00:17:05.607 "traddr": "10.0.0.2", 00:17:05.607 "trsvcid": "4420", 00:17:05.607 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:05.607 }, 00:17:05.607 "ctrlr_data": { 00:17:05.607 "cntlid": 1, 00:17:05.607 "vendor_id": "0x8086", 00:17:05.607 "model_number": "SPDK bdev Controller", 00:17:05.607 "serial_number": "00000000000000000000", 00:17:05.607 "firmware_revision": "24.05", 00:17:05.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:05.607 "oacs": { 00:17:05.607 "security": 0, 00:17:05.607 "format": 0, 00:17:05.607 "firmware": 0, 00:17:05.607 "ns_manage": 0 00:17:05.607 }, 00:17:05.607 "multi_ctrlr": true, 00:17:05.607 "ana_reporting": false 00:17:05.607 }, 00:17:05.607 "vs": { 00:17:05.607 "nvme_version": "1.3" 00:17:05.607 }, 00:17:05.607 "ns_data": { 00:17:05.607 "id": 1, 00:17:05.607 "can_share": true 00:17:05.607 } 00:17:05.607 } 00:17:05.607 ], 00:17:05.607 "mp_policy": "active_passive" 00:17:05.607 } 00:17:05.607 } 00:17:05.607 ] 00:17:05.607 21:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.607 21:31:31 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:05.607 21:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.607 21:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:05.607 [2024-04-24 21:31:31.085448] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:05.607 [2024-04-24 21:31:31.085535] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26339f0 (9): Bad file descriptor 00:17:05.607 [2024-04-24 21:31:31.227774] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:05.607 21:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.607 21:31:31 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:05.607 21:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.607 21:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:05.607 [ 00:17:05.607 { 00:17:05.607 "name": "nvme0n1", 00:17:05.607 "aliases": [ 00:17:05.607 "7f9d531a-9f79-4d4d-8ca8-996e47bf236e" 00:17:05.607 ], 00:17:05.607 "product_name": "NVMe disk", 00:17:05.607 "block_size": 512, 00:17:05.607 "num_blocks": 2097152, 00:17:05.607 "uuid": "7f9d531a-9f79-4d4d-8ca8-996e47bf236e", 00:17:05.607 "assigned_rate_limits": { 00:17:05.607 "rw_ios_per_sec": 0, 00:17:05.607 "rw_mbytes_per_sec": 0, 00:17:05.607 "r_mbytes_per_sec": 0, 00:17:05.607 "w_mbytes_per_sec": 0 00:17:05.607 }, 00:17:05.607 "claimed": false, 00:17:05.607 "zoned": false, 00:17:05.607 "supported_io_types": { 00:17:05.607 "read": true, 00:17:05.607 "write": true, 00:17:05.607 "unmap": false, 00:17:05.607 "write_zeroes": true, 00:17:05.607 "flush": true, 00:17:05.607 "reset": true, 00:17:05.607 "compare": true, 00:17:05.607 "compare_and_write": true, 00:17:05.607 "abort": true, 00:17:05.607 "nvme_admin": true, 00:17:05.607 "nvme_io": true 00:17:05.607 }, 00:17:05.607 "memory_domains": [ 00:17:05.607 { 00:17:05.607 "dma_device_id": "system", 00:17:05.607 "dma_device_type": 1 00:17:05.607 } 00:17:05.607 ], 00:17:05.607 "driver_specific": { 00:17:05.607 "nvme": [ 00:17:05.607 { 00:17:05.607 "trid": { 00:17:05.607 "trtype": "TCP", 00:17:05.607 "adrfam": "IPv4", 00:17:05.607 "traddr": "10.0.0.2", 00:17:05.607 "trsvcid": "4420", 00:17:05.607 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:05.607 }, 00:17:05.607 "ctrlr_data": { 00:17:05.607 "cntlid": 2, 00:17:05.607 "vendor_id": "0x8086", 00:17:05.607 "model_number": "SPDK bdev Controller", 00:17:05.607 "serial_number": "00000000000000000000", 00:17:05.607 "firmware_revision": "24.05", 00:17:05.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:05.607 "oacs": { 00:17:05.607 "security": 0, 00:17:05.607 "format": 0, 00:17:05.607 "firmware": 0, 00:17:05.607 "ns_manage": 0 00:17:05.607 }, 00:17:05.607 "multi_ctrlr": true, 00:17:05.607 "ana_reporting": false 00:17:05.607 }, 00:17:05.607 "vs": { 00:17:05.607 "nvme_version": "1.3" 00:17:05.607 }, 00:17:05.607 "ns_data": { 00:17:05.607 "id": 1, 00:17:05.607 "can_share": true 00:17:05.607 } 00:17:05.607 } 00:17:05.607 ], 00:17:05.607 "mp_policy": "active_passive" 00:17:05.607 } 00:17:05.607 } 00:17:05.607 ] 00:17:05.607 21:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.607 21:31:31 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.607 21:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.607 21:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:05.607 21:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.607 21:31:31 -- host/async_init.sh@53 -- # mktemp 00:17:05.607 21:31:31 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.c8McgAfFlg 00:17:05.607 21:31:31 -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:05.607 21:31:31 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.c8McgAfFlg 00:17:05.607 21:31:31 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:05.607 21:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.607 21:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:05.607 21:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.607 21:31:31 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:05.607 21:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.607 21:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:05.607 [2024-04-24 21:31:31.278113] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:05.607 [2024-04-24 21:31:31.278238] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:05.607 21:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.607 21:31:31 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.c8McgAfFlg 00:17:05.607 21:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.607 21:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:05.866 [2024-04-24 21:31:31.286141] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:05.866 21:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.866 21:31:31 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.c8McgAfFlg 00:17:05.866 21:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.866 21:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:05.866 [2024-04-24 21:31:31.294151] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.866 [2024-04-24 21:31:31.294208] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:05.866 nvme0n1 00:17:05.866 21:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.866 21:31:31 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:05.866 21:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.866 21:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:05.866 [ 00:17:05.866 { 00:17:05.866 "name": "nvme0n1", 00:17:05.866 "aliases": [ 00:17:05.866 "7f9d531a-9f79-4d4d-8ca8-996e47bf236e" 00:17:05.866 ], 00:17:05.866 "product_name": "NVMe disk", 00:17:05.866 "block_size": 512, 00:17:05.866 "num_blocks": 2097152, 00:17:05.866 "uuid": "7f9d531a-9f79-4d4d-8ca8-996e47bf236e", 00:17:05.866 "assigned_rate_limits": { 00:17:05.866 "rw_ios_per_sec": 0, 00:17:05.866 "rw_mbytes_per_sec": 0, 00:17:05.866 "r_mbytes_per_sec": 0, 00:17:05.866 "w_mbytes_per_sec": 0 00:17:05.866 }, 00:17:05.866 "claimed": false, 00:17:05.866 "zoned": false, 00:17:05.866 "supported_io_types": { 00:17:05.866 "read": true, 00:17:05.866 "write": true, 00:17:05.866 "unmap": false, 00:17:05.866 "write_zeroes": true, 00:17:05.866 "flush": true, 00:17:05.866 "reset": true, 00:17:05.866 "compare": true, 00:17:05.866 "compare_and_write": true, 00:17:05.866 
"abort": true, 00:17:05.866 "nvme_admin": true, 00:17:05.866 "nvme_io": true 00:17:05.866 }, 00:17:05.866 "memory_domains": [ 00:17:05.866 { 00:17:05.866 "dma_device_id": "system", 00:17:05.866 "dma_device_type": 1 00:17:05.866 } 00:17:05.866 ], 00:17:05.866 "driver_specific": { 00:17:05.866 "nvme": [ 00:17:05.866 { 00:17:05.866 "trid": { 00:17:05.866 "trtype": "TCP", 00:17:05.866 "adrfam": "IPv4", 00:17:05.866 "traddr": "10.0.0.2", 00:17:05.867 "trsvcid": "4421", 00:17:05.867 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:05.867 }, 00:17:05.867 "ctrlr_data": { 00:17:05.867 "cntlid": 3, 00:17:05.867 "vendor_id": "0x8086", 00:17:05.867 "model_number": "SPDK bdev Controller", 00:17:05.867 "serial_number": "00000000000000000000", 00:17:05.867 "firmware_revision": "24.05", 00:17:05.867 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:05.867 "oacs": { 00:17:05.867 "security": 0, 00:17:05.867 "format": 0, 00:17:05.867 "firmware": 0, 00:17:05.867 "ns_manage": 0 00:17:05.867 }, 00:17:05.867 "multi_ctrlr": true, 00:17:05.867 "ana_reporting": false 00:17:05.867 }, 00:17:05.867 "vs": { 00:17:05.867 "nvme_version": "1.3" 00:17:05.867 }, 00:17:05.867 "ns_data": { 00:17:05.867 "id": 1, 00:17:05.867 "can_share": true 00:17:05.867 } 00:17:05.867 } 00:17:05.867 ], 00:17:05.867 "mp_policy": "active_passive" 00:17:05.867 } 00:17:05.867 } 00:17:05.867 ] 00:17:05.867 21:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.867 21:31:31 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.867 21:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.867 21:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:05.867 21:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.867 21:31:31 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.c8McgAfFlg 00:17:05.867 21:31:31 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:05.867 21:31:31 -- host/async_init.sh@78 -- # nvmftestfini 00:17:05.867 21:31:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:05.867 21:31:31 -- nvmf/common.sh@117 -- # sync 00:17:05.867 21:31:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.867 21:31:31 -- nvmf/common.sh@120 -- # set +e 00:17:05.867 21:31:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.867 21:31:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.867 rmmod nvme_tcp 00:17:05.867 rmmod nvme_fabrics 00:17:05.867 rmmod nvme_keyring 00:17:05.867 21:31:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.867 21:31:31 -- nvmf/common.sh@124 -- # set -e 00:17:05.867 21:31:31 -- nvmf/common.sh@125 -- # return 0 00:17:05.867 21:31:31 -- nvmf/common.sh@478 -- # '[' -n 2634073 ']' 00:17:05.867 21:31:31 -- nvmf/common.sh@479 -- # killprocess 2634073 00:17:05.867 21:31:31 -- common/autotest_common.sh@936 -- # '[' -z 2634073 ']' 00:17:05.867 21:31:31 -- common/autotest_common.sh@940 -- # kill -0 2634073 00:17:05.867 21:31:31 -- common/autotest_common.sh@941 -- # uname 00:17:05.867 21:31:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:05.867 21:31:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2634073 00:17:05.867 21:31:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:05.867 21:31:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:05.867 21:31:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2634073' 00:17:05.867 killing process with pid 2634073 00:17:05.867 21:31:31 -- common/autotest_common.sh@955 -- # kill 2634073 00:17:05.867 
[2024-04-24 21:31:31.465337] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:05.867 [2024-04-24 21:31:31.465374] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:05.867 21:31:31 -- common/autotest_common.sh@960 -- # wait 2634073 00:17:06.163 21:31:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:06.163 21:31:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:06.163 21:31:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:06.163 21:31:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.163 21:31:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.163 21:31:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.163 21:31:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.163 21:31:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.699 21:31:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.699 00:17:08.699 real 0m5.571s 00:17:08.699 user 0m2.110s 00:17:08.699 sys 0m1.835s 00:17:08.699 21:31:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:08.699 21:31:33 -- common/autotest_common.sh@10 -- # set +x 00:17:08.699 ************************************ 00:17:08.699 END TEST nvmf_async_init 00:17:08.699 ************************************ 00:17:08.699 21:31:33 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:08.699 21:31:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:08.699 21:31:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.699 21:31:33 -- common/autotest_common.sh@10 -- # set +x 00:17:08.699 ************************************ 00:17:08.699 START TEST dma 00:17:08.699 ************************************ 00:17:08.699 21:31:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:08.699 * Looking for test storage... 
00:17:08.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:08.699 21:31:33 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.699 21:31:33 -- nvmf/common.sh@7 -- # uname -s 00:17:08.699 21:31:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.699 21:31:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.699 21:31:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.699 21:31:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.699 21:31:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.699 21:31:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.699 21:31:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.699 21:31:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.699 21:31:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.699 21:31:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.699 21:31:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.699 21:31:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.699 21:31:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.699 21:31:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.699 21:31:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.699 21:31:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.699 21:31:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.699 21:31:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.699 21:31:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.699 21:31:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.699 21:31:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.699 21:31:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.699 21:31:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.699 21:31:33 -- paths/export.sh@5 -- # export PATH 00:17:08.699 21:31:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.699 21:31:33 -- nvmf/common.sh@47 -- # : 0 00:17:08.699 21:31:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.699 21:31:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.699 21:31:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.699 21:31:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.699 21:31:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.699 21:31:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.699 21:31:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.699 21:31:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.699 21:31:33 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:08.699 21:31:33 -- host/dma.sh@13 -- # exit 0 00:17:08.699 00:17:08.699 real 0m0.072s 00:17:08.699 user 0m0.034s 00:17:08.699 sys 0m0.043s 00:17:08.700 21:31:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:08.700 21:31:33 -- common/autotest_common.sh@10 -- # set +x 00:17:08.700 ************************************ 00:17:08.700 END TEST dma 00:17:08.700 ************************************ 00:17:08.700 21:31:33 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:08.700 21:31:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:08.700 21:31:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.700 21:31:33 -- common/autotest_common.sh@10 -- # set +x 00:17:08.700 ************************************ 00:17:08.700 START TEST nvmf_identify 00:17:08.700 ************************************ 00:17:08.700 21:31:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:08.700 * Looking for test storage... 
00:17:08.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:08.700 21:31:34 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.700 21:31:34 -- nvmf/common.sh@7 -- # uname -s 00:17:08.700 21:31:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.700 21:31:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.700 21:31:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.700 21:31:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.700 21:31:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.700 21:31:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.700 21:31:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.700 21:31:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.700 21:31:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.700 21:31:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.700 21:31:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.700 21:31:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.700 21:31:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.700 21:31:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.700 21:31:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.700 21:31:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.700 21:31:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.700 21:31:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.700 21:31:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.700 21:31:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.700 21:31:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.700 21:31:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.700 21:31:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.700 21:31:34 -- paths/export.sh@5 -- # export PATH 00:17:08.700 21:31:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.700 21:31:34 -- nvmf/common.sh@47 -- # : 0 00:17:08.700 21:31:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.700 21:31:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.700 21:31:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.700 21:31:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.700 21:31:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.700 21:31:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.700 21:31:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.700 21:31:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.700 21:31:34 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.700 21:31:34 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.700 21:31:34 -- host/identify.sh@14 -- # nvmftestinit 00:17:08.700 21:31:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:08.700 21:31:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.700 21:31:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:08.700 21:31:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:08.700 21:31:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:08.700 21:31:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.700 21:31:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.700 21:31:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.700 21:31:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:08.700 21:31:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:08.700 21:31:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.700 21:31:34 -- common/autotest_common.sh@10 -- # set +x 00:17:10.602 21:31:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:10.602 21:31:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.602 21:31:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.602 21:31:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.602 21:31:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.602 21:31:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.602 21:31:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.602 21:31:36 -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.602 21:31:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.602 21:31:36 -- nvmf/common.sh@296 
-- # e810=() 00:17:10.602 21:31:36 -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.602 21:31:36 -- nvmf/common.sh@297 -- # x722=() 00:17:10.602 21:31:36 -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.602 21:31:36 -- nvmf/common.sh@298 -- # mlx=() 00:17:10.602 21:31:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.602 21:31:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.602 21:31:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.602 21:31:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.602 21:31:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.602 21:31:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.602 21:31:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.602 21:31:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.602 21:31:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.602 21:31:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.602 21:31:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.602 21:31:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.602 21:31:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.602 21:31:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.602 21:31:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:10.602 21:31:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:10.602 21:31:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:10.602 21:31:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.603 21:31:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.603 21:31:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:10.603 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:10.603 21:31:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.603 21:31:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:10.603 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:10.603 21:31:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.603 21:31:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.603 21:31:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.603 21:31:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:10.603 21:31:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.603 21:31:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:10.603 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:17:10.603 21:31:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.603 21:31:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.603 21:31:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.603 21:31:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:10.603 21:31:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.603 21:31:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:10.603 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:10.603 21:31:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.603 21:31:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:10.603 21:31:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:10.603 21:31:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:10.603 21:31:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.603 21:31:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.603 21:31:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.603 21:31:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.603 21:31:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.603 21:31:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.603 21:31:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.603 21:31:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.603 21:31:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.603 21:31:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.603 21:31:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.603 21:31:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.603 21:31:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.603 21:31:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.603 21:31:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.603 21:31:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.603 21:31:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.603 21:31:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.603 21:31:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.603 21:31:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:17:10.603 00:17:10.603 --- 10.0.0.2 ping statistics --- 00:17:10.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.603 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:17:10.603 21:31:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:10.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:17:10.603 00:17:10.603 --- 10.0.0.1 ping statistics --- 00:17:10.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.603 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:17:10.603 21:31:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.603 21:31:36 -- nvmf/common.sh@411 -- # return 0 00:17:10.603 21:31:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:10.603 21:31:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.603 21:31:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:10.603 21:31:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.603 21:31:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:10.603 21:31:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:10.862 21:31:36 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:10.862 21:31:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:10.862 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:10.862 21:31:36 -- host/identify.sh@19 -- # nvmfpid=2636219 00:17:10.862 21:31:36 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:10.862 21:31:36 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.862 21:31:36 -- host/identify.sh@23 -- # waitforlisten 2636219 00:17:10.862 21:31:36 -- common/autotest_common.sh@817 -- # '[' -z 2636219 ']' 00:17:10.862 21:31:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.862 21:31:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:10.862 21:31:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.862 21:31:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:10.862 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:10.862 [2024-04-24 21:31:36.345367] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:17:10.862 [2024-04-24 21:31:36.345436] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.862 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.862 [2024-04-24 21:31:36.414785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.862 [2024-04-24 21:31:36.532505] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.862 [2024-04-24 21:31:36.532562] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.862 [2024-04-24 21:31:36.532586] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.862 [2024-04-24 21:31:36.532597] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.862 [2024-04-24 21:31:36.532607] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
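For reference, the nvmf_tcp_init sequence traced above boils down to the following standalone shell steps — a minimal sketch using the interface names and addresses from this run (cvl_0_0/cvl_0_1 are the ice netdevs discovered earlier; the nvmf_tgt path is shortened from the full workspace path; run as root):

# Flush any stale addressing, then move one port of the NIC into a private
# namespace so it can act as the target while the other port stays initiator.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator side (default namespace) gets 10.0.0.1, target side gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic on port 4420 and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Load the kernel initiator driver and background the target inside the
# namespace with the same core mask and trace flags identify.sh used above.
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &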
00:17:10.862 [2024-04-24 21:31:36.532749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.862 [2024-04-24 21:31:36.532830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.862 [2024-04-24 21:31:36.532930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.862 [2024-04-24 21:31:36.532933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.120 21:31:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:11.120 21:31:36 -- common/autotest_common.sh@850 -- # return 0 00:17:11.121 21:31:36 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:11.121 21:31:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.121 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:11.121 [2024-04-24 21:31:36.675425] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.121 21:31:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.121 21:31:36 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:11.121 21:31:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:11.121 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:11.121 21:31:36 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:11.121 21:31:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.121 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:11.121 Malloc0 00:17:11.121 21:31:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.121 21:31:36 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:11.121 21:31:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.121 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:11.121 21:31:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.121 21:31:36 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:11.121 21:31:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.121 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:11.121 21:31:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.121 21:31:36 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.121 21:31:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.121 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:11.121 [2024-04-24 21:31:36.752036] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.121 21:31:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.121 21:31:36 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:11.121 21:31:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.121 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:11.121 21:31:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.121 21:31:36 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:11.121 21:31:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.121 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:11.121 [2024-04-24 21:31:36.767751] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:11.121 [ 
00:17:11.121 { 00:17:11.121 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:11.121 "subtype": "Discovery", 00:17:11.121 "listen_addresses": [ 00:17:11.121 { 00:17:11.121 "transport": "TCP", 00:17:11.121 "trtype": "TCP", 00:17:11.121 "adrfam": "IPv4", 00:17:11.121 "traddr": "10.0.0.2", 00:17:11.121 "trsvcid": "4420" 00:17:11.121 } 00:17:11.121 ], 00:17:11.121 "allow_any_host": true, 00:17:11.121 "hosts": [] 00:17:11.121 }, 00:17:11.121 { 00:17:11.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.121 "subtype": "NVMe", 00:17:11.121 "listen_addresses": [ 00:17:11.121 { 00:17:11.121 "transport": "TCP", 00:17:11.121 "trtype": "TCP", 00:17:11.121 "adrfam": "IPv4", 00:17:11.121 "traddr": "10.0.0.2", 00:17:11.121 "trsvcid": "4420" 00:17:11.121 } 00:17:11.121 ], 00:17:11.121 "allow_any_host": true, 00:17:11.121 "hosts": [], 00:17:11.121 "serial_number": "SPDK00000000000001", 00:17:11.121 "model_number": "SPDK bdev Controller", 00:17:11.121 "max_namespaces": 32, 00:17:11.121 "min_cntlid": 1, 00:17:11.121 "max_cntlid": 65519, 00:17:11.121 "namespaces": [ 00:17:11.121 { 00:17:11.121 "nsid": 1, 00:17:11.121 "bdev_name": "Malloc0", 00:17:11.121 "name": "Malloc0", 00:17:11.121 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:11.121 "eui64": "ABCDEF0123456789", 00:17:11.121 "uuid": "235537d4-21ed-411e-864d-b4ee3e479562" 00:17:11.121 } 00:17:11.121 ] 00:17:11.121 } 00:17:11.121 ] 00:17:11.121 21:31:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.121 21:31:36 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:11.121 [2024-04-24 21:31:36.794064] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
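A note on the rpc_cmd sequence that produced the JSON above: rpc_cmd drives scripts/rpc.py against the target's RPC socket, so the same configuration can be replayed by hand — a minimal sketch, assuming the rpc.py from this SPDK tree and the default /var/tmp/spdk.sock socket that waitforlisten polled:

RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'

# TCP transport with the exact options identify.sh passed (-o, -u 8192).
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks to back the namespace.
$RPC bdev_malloc_create 64 512 -b Malloc0

# Subsystem open to any host (-a) with a fixed serial number (-s), then
# attach Malloc0 with the NGUID/EUI64 echoed in the JSON above.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

# Listeners on 10.0.0.2:4420 for both the NVM subsystem and discovery.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Dump the result; this produces the JSON array shown above.
$RPC nvmf_get_subsystems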
00:17:11.121 [2024-04-24 21:31:36.794108] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636369 ] 00:17:11.390 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.390 [2024-04-24 21:31:36.831035] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:11.390 [2024-04-24 21:31:36.831096] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:11.390 [2024-04-24 21:31:36.831106] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:11.390 [2024-04-24 21:31:36.831121] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:11.390 [2024-04-24 21:31:36.831134] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:11.390 [2024-04-24 21:31:36.831444] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:11.390 [2024-04-24 21:31:36.831499] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15add00 0 00:17:11.390 [2024-04-24 21:31:36.837658] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:11.390 [2024-04-24 21:31:36.837678] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:11.390 [2024-04-24 21:31:36.837687] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:11.390 [2024-04-24 21:31:36.837693] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:11.390 [2024-04-24 21:31:36.837760] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.837772] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.837780] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15add00) 00:17:11.390 [2024-04-24 21:31:36.837800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:11.390 [2024-04-24 21:31:36.837826] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160cec0, cid 0, qid 0 00:17:11.390 [2024-04-24 21:31:36.845644] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.390 [2024-04-24 21:31:36.845663] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.390 [2024-04-24 21:31:36.845670] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.845679] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160cec0) on tqpair=0x15add00 00:17:11.390 [2024-04-24 21:31:36.845701] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:11.390 [2024-04-24 21:31:36.845713] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:11.390 [2024-04-24 21:31:36.845723] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:11.390 [2024-04-24 21:31:36.845744] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.845753] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.845760] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15add00) 00:17:11.390 [2024-04-24 21:31:36.845772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.390 [2024-04-24 21:31:36.845795] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160cec0, cid 0, qid 0 00:17:11.390 [2024-04-24 21:31:36.846007] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.390 [2024-04-24 21:31:36.846022] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.390 [2024-04-24 21:31:36.846029] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.846036] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160cec0) on tqpair=0x15add00 00:17:11.390 [2024-04-24 21:31:36.846046] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:11.390 [2024-04-24 21:31:36.846066] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:11.390 [2024-04-24 21:31:36.846080] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.846088] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.846095] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15add00) 00:17:11.390 [2024-04-24 21:31:36.846105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.390 [2024-04-24 21:31:36.846126] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160cec0, cid 0, qid 0 00:17:11.390 [2024-04-24 21:31:36.846320] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.390 [2024-04-24 21:31:36.846332] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.390 [2024-04-24 21:31:36.846339] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.846346] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160cec0) on tqpair=0x15add00 00:17:11.390 [2024-04-24 21:31:36.846356] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:11.390 [2024-04-24 21:31:36.846370] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:11.390 [2024-04-24 21:31:36.846382] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.846390] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.846396] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15add00) 00:17:11.390 [2024-04-24 21:31:36.846407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.390 [2024-04-24 21:31:36.846427] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160cec0, cid 0, qid 0 00:17:11.390 [2024-04-24 21:31:36.846583] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.390 [2024-04-24 
21:31:36.846599] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.390 [2024-04-24 21:31:36.846606] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.846612] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160cec0) on tqpair=0x15add00 00:17:11.390 [2024-04-24 21:31:36.846623] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:11.390 [2024-04-24 21:31:36.846649] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.846660] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.390 [2024-04-24 21:31:36.846667] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15add00) 00:17:11.390 [2024-04-24 21:31:36.846677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.391 [2024-04-24 21:31:36.846699] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160cec0, cid 0, qid 0 00:17:11.391 [2024-04-24 21:31:36.846891] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.391 [2024-04-24 21:31:36.846903] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.391 [2024-04-24 21:31:36.846910] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.846917] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160cec0) on tqpair=0x15add00 00:17:11.391 [2024-04-24 21:31:36.846926] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:11.391 [2024-04-24 21:31:36.846935] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:11.391 [2024-04-24 21:31:36.846948] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:11.391 [2024-04-24 21:31:36.847062] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:11.391 [2024-04-24 21:31:36.847071] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:11.391 [2024-04-24 21:31:36.847086] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.847093] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.847115] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15add00) 00:17:11.391 [2024-04-24 21:31:36.847126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.391 [2024-04-24 21:31:36.847147] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160cec0, cid 0, qid 0 00:17:11.391 [2024-04-24 21:31:36.847352] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.391 [2024-04-24 21:31:36.847364] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.391 [2024-04-24 21:31:36.847371] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.847378] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160cec0) on tqpair=0x15add00 00:17:11.391 [2024-04-24 21:31:36.847388] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:11.391 [2024-04-24 21:31:36.847405] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.847414] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.847420] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15add00) 00:17:11.391 [2024-04-24 21:31:36.847431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.391 [2024-04-24 21:31:36.847451] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160cec0, cid 0, qid 0 00:17:11.391 [2024-04-24 21:31:36.847609] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.391 [2024-04-24 21:31:36.847624] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.391 [2024-04-24 21:31:36.847639] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.847647] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160cec0) on tqpair=0x15add00 00:17:11.391 [2024-04-24 21:31:36.847656] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:11.391 [2024-04-24 21:31:36.847665] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:11.391 [2024-04-24 21:31:36.847678] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:11.391 [2024-04-24 21:31:36.847693] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:11.391 [2024-04-24 21:31:36.847712] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.847721] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15add00) 00:17:11.391 [2024-04-24 21:31:36.847732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.391 [2024-04-24 21:31:36.847753] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160cec0, cid 0, qid 0 00:17:11.391 [2024-04-24 21:31:36.847991] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.391 [2024-04-24 21:31:36.848003] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.391 [2024-04-24 21:31:36.848014] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848021] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15add00): datao=0, datal=4096, cccid=0 00:17:11.391 [2024-04-24 21:31:36.848029] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160cec0) on tqpair(0x15add00): expected_datao=0, payload_size=4096 00:17:11.391 [2024-04-24 21:31:36.848038] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848049] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848058] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848108] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.391 [2024-04-24 21:31:36.848119] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.391 [2024-04-24 21:31:36.848126] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848133] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160cec0) on tqpair=0x15add00 00:17:11.391 [2024-04-24 21:31:36.848146] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:11.391 [2024-04-24 21:31:36.848156] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:11.391 [2024-04-24 21:31:36.848164] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:11.391 [2024-04-24 21:31:36.848172] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:11.391 [2024-04-24 21:31:36.848180] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:11.391 [2024-04-24 21:31:36.848188] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:11.391 [2024-04-24 21:31:36.848203] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:11.391 [2024-04-24 21:31:36.848215] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848223] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848230] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15add00) 00:17:11.391 [2024-04-24 21:31:36.848241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.391 [2024-04-24 21:31:36.848262] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160cec0, cid 0, qid 0 00:17:11.391 [2024-04-24 21:31:36.848454] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.391 [2024-04-24 21:31:36.848466] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.391 [2024-04-24 21:31:36.848473] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848480] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160cec0) on tqpair=0x15add00 00:17:11.391 [2024-04-24 21:31:36.848493] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848501] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848507] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15add00) 00:17:11.391 [2024-04-24 21:31:36.848517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:11.391 [2024-04-24 21:31:36.848528] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848535] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848541] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15add00) 00:17:11.391 [2024-04-24 21:31:36.848550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.391 [2024-04-24 21:31:36.848564] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848572] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848579] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15add00) 00:17:11.391 [2024-04-24 21:31:36.848588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.391 [2024-04-24 21:31:36.848598] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848605] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.391 [2024-04-24 21:31:36.848611] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15add00) 00:17:11.392 [2024-04-24 21:31:36.848620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.392 [2024-04-24 21:31:36.848637] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:11.392 [2024-04-24 21:31:36.848658] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:11.392 [2024-04-24 21:31:36.848671] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.848678] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15add00) 00:17:11.392 [2024-04-24 21:31:36.848689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.392 [2024-04-24 21:31:36.848712] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160cec0, cid 0, qid 0 00:17:11.392 [2024-04-24 21:31:36.848723] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160d020, cid 1, qid 0 00:17:11.392 [2024-04-24 21:31:36.848731] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160d180, cid 2, qid 0 00:17:11.392 [2024-04-24 21:31:36.848739] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160d2e0, cid 3, qid 0 00:17:11.392 [2024-04-24 21:31:36.848747] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160d440, cid 4, qid 0 00:17:11.392 [2024-04-24 21:31:36.848970] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.392 [2024-04-24 21:31:36.848985] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.392 [2024-04-24 21:31:36.848992] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.848999] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160d440) on tqpair=0x15add00 
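Decoding the admin commands above: SET FEATURES with cdw10:0000000b is feature 0x0b (Async Event Configuration), the four ASYNC EVENT REQUEST (0c) submissions on cid 0-3 line up with the Async Event Request Limit of 4 reported in the identify data further down, and GET FEATURES with cdw10:0000000f is feature 0x0f (Keep Alive Timer). Against a connected controller the same features can be inspected with nvme-cli — a sketch only, since the /dev/nvme0 device name is hypothetical and not taken from this run:

# Feature 0x0b: Async Event Configuration (which AEN classes are armed).
# /dev/nvme0 is an assumed device name, not one from this log.
nvme get-feature /dev/nvme0 --feature-id=0x0b

# Feature 0x0f: Keep Alive Timer; the returned value is the KATO in ms.
nvme get-feature /dev/nvme0 --feature-id=0x0f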
00:17:11.392 [2024-04-24 21:31:36.849009] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:11.392 [2024-04-24 21:31:36.849018] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:11.392 [2024-04-24 21:31:36.849036] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.849046] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15add00) 00:17:11.392 [2024-04-24 21:31:36.849057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.392 [2024-04-24 21:31:36.849077] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160d440, cid 4, qid 0 00:17:11.392 [2024-04-24 21:31:36.849282] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.392 [2024-04-24 21:31:36.849294] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.392 [2024-04-24 21:31:36.849301] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.849308] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15add00): datao=0, datal=4096, cccid=4 00:17:11.392 [2024-04-24 21:31:36.849316] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160d440) on tqpair(0x15add00): expected_datao=0, payload_size=4096 00:17:11.392 [2024-04-24 21:31:36.849328] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.849362] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.849371] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.849483] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.392 [2024-04-24 21:31:36.849498] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.392 [2024-04-24 21:31:36.849505] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.849512] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160d440) on tqpair=0x15add00 00:17:11.392 [2024-04-24 21:31:36.849533] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:11.392 [2024-04-24 21:31:36.849563] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.849573] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15add00) 00:17:11.392 [2024-04-24 21:31:36.849584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.392 [2024-04-24 21:31:36.849596] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.849603] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.849609] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15add00) 00:17:11.392 [2024-04-24 21:31:36.849619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.392 [2024-04-24 21:31:36.853669] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160d440, cid 4, qid 0 00:17:11.392 [2024-04-24 21:31:36.853684] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160d5a0, cid 5, qid 0 00:17:11.392 [2024-04-24 21:31:36.853953] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.392 [2024-04-24 21:31:36.853969] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.392 [2024-04-24 21:31:36.853976] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.853983] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15add00): datao=0, datal=1024, cccid=4 00:17:11.392 [2024-04-24 21:31:36.853991] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160d440) on tqpair(0x15add00): expected_datao=0, payload_size=1024 00:17:11.392 [2024-04-24 21:31:36.853999] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.854009] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.854017] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.854026] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.392 [2024-04-24 21:31:36.854035] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.392 [2024-04-24 21:31:36.854042] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.854049] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160d5a0) on tqpair=0x15add00 00:17:11.392 [2024-04-24 21:31:36.894798] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.392 [2024-04-24 21:31:36.894818] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.392 [2024-04-24 21:31:36.894825] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.894833] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160d440) on tqpair=0x15add00 00:17:11.392 [2024-04-24 21:31:36.894852] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.894861] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15add00) 00:17:11.392 [2024-04-24 21:31:36.894873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.392 [2024-04-24 21:31:36.894907] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160d440, cid 4, qid 0 00:17:11.392 [2024-04-24 21:31:36.895087] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.392 [2024-04-24 21:31:36.895100] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.392 [2024-04-24 21:31:36.895107] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.895114] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15add00): datao=0, datal=3072, cccid=4 00:17:11.392 [2024-04-24 21:31:36.895122] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160d440) on tqpair(0x15add00): expected_datao=0, payload_size=3072 00:17:11.392 [2024-04-24 21:31:36.895130] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.895140] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
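The GET LOG PAGE (02) commands in this stretch read log ID 0x70, the discovery log page: in cdw10 the low byte is the LID and the upper 16 bits are NUMDL, a 0-based dword count, so cdw10:00ff0070 is a 1024-byte read (log header plus first records) and cdw10:02ff0070 a 3072-byte read for the remainder — matching the datal values in the C2H PDUs — while the short 8-byte read that follows re-fetches the generation counter, apparently to confirm the log did not change mid-read. The raw equivalent with nvme-cli would be roughly (device name hypothetical):

# Log ID 0x70 = discovery log page; pull the first 1 KiB as this run did.
# /dev/nvme0 is an assumed device name, not one from this log.
nvme get-log /dev/nvme0 --log-id=0x70 --log-len=1024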
00:17:11.392 [2024-04-24 21:31:36.895148] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.895201] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.392 [2024-04-24 21:31:36.895212] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.392 [2024-04-24 21:31:36.895219] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.895226] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160d440) on tqpair=0x15add00 00:17:11.392 [2024-04-24 21:31:36.895242] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.392 [2024-04-24 21:31:36.895251] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15add00) 00:17:11.393 [2024-04-24 21:31:36.895262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.393 [2024-04-24 21:31:36.895290] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160d440, cid 4, qid 0 00:17:11.393 [2024-04-24 21:31:36.895458] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.393 [2024-04-24 21:31:36.895470] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.393 [2024-04-24 21:31:36.895477] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.393 [2024-04-24 21:31:36.895483] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15add00): datao=0, datal=8, cccid=4 00:17:11.393 [2024-04-24 21:31:36.895491] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160d440) on tqpair(0x15add00): expected_datao=0, payload_size=8 00:17:11.393 [2024-04-24 21:31:36.895499] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.393 [2024-04-24 21:31:36.895509] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.393 [2024-04-24 21:31:36.895516] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.393 [2024-04-24 21:31:36.938643] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.393 [2024-04-24 21:31:36.938661] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.393 [2024-04-24 21:31:36.938669] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.393 [2024-04-24 21:31:36.938690] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160d440) on tqpair=0x15add00 00:17:11.393 ===================================================== 00:17:11.393 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:11.393 ===================================================== 00:17:11.393 Controller Capabilities/Features 00:17:11.393 ================================ 00:17:11.393 Vendor ID: 0000 00:17:11.393 Subsystem Vendor ID: 0000 00:17:11.393 Serial Number: .................... 00:17:11.393 Model Number: ........................................ 
00:17:11.393 Firmware Version: 24.05 00:17:11.393 Recommended Arb Burst: 0 00:17:11.393 IEEE OUI Identifier: 00 00 00 00:17:11.393 Multi-path I/O 00:17:11.393 May have multiple subsystem ports: No 00:17:11.393 May have multiple controllers: No 00:17:11.393 Associated with SR-IOV VF: No 00:17:11.393 Max Data Transfer Size: 131072 00:17:11.393 Max Number of Namespaces: 0 00:17:11.393 Max Number of I/O Queues: 1024 00:17:11.393 NVMe Specification Version (VS): 1.3 00:17:11.393 NVMe Specification Version (Identify): 1.3 00:17:11.393 Maximum Queue Entries: 128 00:17:11.393 Contiguous Queues Required: Yes 00:17:11.393 Arbitration Mechanisms Supported 00:17:11.393 Weighted Round Robin: Not Supported 00:17:11.393 Vendor Specific: Not Supported 00:17:11.393 Reset Timeout: 15000 ms 00:17:11.393 Doorbell Stride: 4 bytes 00:17:11.393 NVM Subsystem Reset: Not Supported 00:17:11.393 Command Sets Supported 00:17:11.393 NVM Command Set: Supported 00:17:11.393 Boot Partition: Not Supported 00:17:11.393 Memory Page Size Minimum: 4096 bytes 00:17:11.393 Memory Page Size Maximum: 4096 bytes 00:17:11.393 Persistent Memory Region: Not Supported 00:17:11.393 Optional Asynchronous Events Supported 00:17:11.393 Namespace Attribute Notices: Not Supported 00:17:11.393 Firmware Activation Notices: Not Supported 00:17:11.393 ANA Change Notices: Not Supported 00:17:11.393 PLE Aggregate Log Change Notices: Not Supported 00:17:11.393 LBA Status Info Alert Notices: Not Supported 00:17:11.393 EGE Aggregate Log Change Notices: Not Supported 00:17:11.393 Normal NVM Subsystem Shutdown event: Not Supported 00:17:11.393 Zone Descriptor Change Notices: Not Supported 00:17:11.393 Discovery Log Change Notices: Supported 00:17:11.393 Controller Attributes 00:17:11.393 128-bit Host Identifier: Not Supported 00:17:11.393 Non-Operational Permissive Mode: Not Supported 00:17:11.393 NVM Sets: Not Supported 00:17:11.393 Read Recovery Levels: Not Supported 00:17:11.393 Endurance Groups: Not Supported 00:17:11.393 Predictable Latency Mode: Not Supported 00:17:11.393 Traffic Based Keep ALive: Not Supported 00:17:11.393 Namespace Granularity: Not Supported 00:17:11.393 SQ Associations: Not Supported 00:17:11.393 UUID List: Not Supported 00:17:11.393 Multi-Domain Subsystem: Not Supported 00:17:11.393 Fixed Capacity Management: Not Supported 00:17:11.393 Variable Capacity Management: Not Supported 00:17:11.393 Delete Endurance Group: Not Supported 00:17:11.393 Delete NVM Set: Not Supported 00:17:11.393 Extended LBA Formats Supported: Not Supported 00:17:11.393 Flexible Data Placement Supported: Not Supported 00:17:11.393 00:17:11.393 Controller Memory Buffer Support 00:17:11.393 ================================ 00:17:11.393 Supported: No 00:17:11.393 00:17:11.393 Persistent Memory Region Support 00:17:11.393 ================================ 00:17:11.393 Supported: No 00:17:11.393 00:17:11.393 Admin Command Set Attributes 00:17:11.393 ============================ 00:17:11.393 Security Send/Receive: Not Supported 00:17:11.393 Format NVM: Not Supported 00:17:11.393 Firmware Activate/Download: Not Supported 00:17:11.393 Namespace Management: Not Supported 00:17:11.393 Device Self-Test: Not Supported 00:17:11.393 Directives: Not Supported 00:17:11.393 NVMe-MI: Not Supported 00:17:11.393 Virtualization Management: Not Supported 00:17:11.393 Doorbell Buffer Config: Not Supported 00:17:11.393 Get LBA Status Capability: Not Supported 00:17:11.393 Command & Feature Lockdown Capability: Not Supported 00:17:11.393 Abort Command Limit: 1 00:17:11.393 Async 
Event Request Limit: 4 00:17:11.393 Number of Firmware Slots: N/A 00:17:11.393 Firmware Slot 1 Read-Only: N/A 00:17:11.393 Firmware Activation Without Reset: N/A 00:17:11.393 Multiple Update Detection Support: N/A 00:17:11.393 Firmware Update Granularity: No Information Provided 00:17:11.393 Per-Namespace SMART Log: No 00:17:11.393 Asymmetric Namespace Access Log Page: Not Supported 00:17:11.393 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:11.393 Command Effects Log Page: Not Supported 00:17:11.393 Get Log Page Extended Data: Supported 00:17:11.393 Telemetry Log Pages: Not Supported 00:17:11.393 Persistent Event Log Pages: Not Supported 00:17:11.393 Supported Log Pages Log Page: May Support 00:17:11.393 Commands Supported & Effects Log Page: Not Supported 00:17:11.393 Feature Identifiers & Effects Log Page:May Support 00:17:11.393 NVMe-MI Commands & Effects Log Page: May Support 00:17:11.393 Data Area 4 for Telemetry Log: Not Supported 00:17:11.393 Error Log Page Entries Supported: 128 00:17:11.393 Keep Alive: Not Supported 00:17:11.393 00:17:11.393 NVM Command Set Attributes 00:17:11.393 ========================== 00:17:11.393 Submission Queue Entry Size 00:17:11.393 Max: 1 00:17:11.393 Min: 1 00:17:11.393 Completion Queue Entry Size 00:17:11.393 Max: 1 00:17:11.393 Min: 1 00:17:11.393 Number of Namespaces: 0 00:17:11.393 Compare Command: Not Supported 00:17:11.393 Write Uncorrectable Command: Not Supported 00:17:11.393 Dataset Management Command: Not Supported 00:17:11.393 Write Zeroes Command: Not Supported 00:17:11.393 Set Features Save Field: Not Supported 00:17:11.393 Reservations: Not Supported 00:17:11.393 Timestamp: Not Supported 00:17:11.393 Copy: Not Supported 00:17:11.393 Volatile Write Cache: Not Present 00:17:11.393 Atomic Write Unit (Normal): 1 00:17:11.393 Atomic Write Unit (PFail): 1 00:17:11.394 Atomic Compare & Write Unit: 1 00:17:11.394 Fused Compare & Write: Supported 00:17:11.394 Scatter-Gather List 00:17:11.394 SGL Command Set: Supported 00:17:11.394 SGL Keyed: Supported 00:17:11.394 SGL Bit Bucket Descriptor: Not Supported 00:17:11.394 SGL Metadata Pointer: Not Supported 00:17:11.394 Oversized SGL: Not Supported 00:17:11.394 SGL Metadata Address: Not Supported 00:17:11.394 SGL Offset: Supported 00:17:11.394 Transport SGL Data Block: Not Supported 00:17:11.394 Replay Protected Memory Block: Not Supported 00:17:11.394 00:17:11.394 Firmware Slot Information 00:17:11.394 ========================= 00:17:11.394 Active slot: 0 00:17:11.394 00:17:11.394 00:17:11.394 Error Log 00:17:11.394 ========= 00:17:11.394 00:17:11.394 Active Namespaces 00:17:11.394 ================= 00:17:11.394 Discovery Log Page 00:17:11.394 ================== 00:17:11.394 Generation Counter: 2 00:17:11.394 Number of Records: 2 00:17:11.394 Record Format: 0 00:17:11.394 00:17:11.394 Discovery Log Entry 0 00:17:11.394 ---------------------- 00:17:11.394 Transport Type: 3 (TCP) 00:17:11.394 Address Family: 1 (IPv4) 00:17:11.394 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:11.394 Entry Flags: 00:17:11.394 Duplicate Returned Information: 1 00:17:11.394 Explicit Persistent Connection Support for Discovery: 1 00:17:11.394 Transport Requirements: 00:17:11.394 Secure Channel: Not Required 00:17:11.394 Port ID: 0 (0x0000) 00:17:11.394 Controller ID: 65535 (0xffff) 00:17:11.394 Admin Max SQ Size: 128 00:17:11.394 Transport Service Identifier: 4420 00:17:11.394 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:11.394 Transport Address: 10.0.0.2 00:17:11.394 
Discovery Log Entry 1 00:17:11.394 ---------------------- 00:17:11.394 Transport Type: 3 (TCP) 00:17:11.394 Address Family: 1 (IPv4) 00:17:11.394 Subsystem Type: 2 (NVM Subsystem) 00:17:11.394 Entry Flags: 00:17:11.394 Duplicate Returned Information: 0 00:17:11.394 Explicit Persistent Connection Support for Discovery: 0 00:17:11.394 Transport Requirements: 00:17:11.394 Secure Channel: Not Required 00:17:11.394 Port ID: 0 (0x0000) 00:17:11.394 Controller ID: 65535 (0xffff) 00:17:11.394 Admin Max SQ Size: 128 00:17:11.394 Transport Service Identifier: 4420 00:17:11.394 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:11.394 Transport Address: 10.0.0.2 [2024-04-24 21:31:36.938814] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:11.394 [2024-04-24 21:31:36.938839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.394 [2024-04-24 21:31:36.938852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.394 [2024-04-24 21:31:36.938862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.394 [2024-04-24 21:31:36.938871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.394 [2024-04-24 21:31:36.938885] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.394 [2024-04-24 21:31:36.938897] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.394 [2024-04-24 21:31:36.938904] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15add00) 00:17:11.394 [2024-04-24 21:31:36.938915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.394 [2024-04-24 21:31:36.938939] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160d2e0, cid 3, qid 0 00:17:11.394 [2024-04-24 21:31:36.939091] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.394 [2024-04-24 21:31:36.939103] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.394 [2024-04-24 21:31:36.939111] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.394 [2024-04-24 21:31:36.939118] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160d2e0) on tqpair=0x15add00 00:17:11.394 [2024-04-24 21:31:36.939131] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.394 [2024-04-24 21:31:36.939138] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.394 [2024-04-24 21:31:36.939145] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15add00) 00:17:11.394 [2024-04-24 21:31:36.939156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.394 [2024-04-24 21:31:36.939182] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160d2e0, cid 3, qid 0 00:17:11.394 [2024-04-24 21:31:36.939349] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.394 [2024-04-24 21:31:36.939361] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.394 [2024-04-24 21:31:36.939368] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.394 [2024-04-24 21:31:36.939375] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160d2e0) on tqpair=0x15add00 00:17:11.394 [2024-04-24 21:31:36.939385] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:11.394 [2024-04-24 21:31:36.939393] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
[2024-04-24 21:31:36.939409 - 21:31:36.946740] (~14 identical shutdown poll cycles condensed: each cycle sends capsule_cmd cid=3 on tqpair(0x15add00) / FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, then handles the pdu type = 5 response and completes tcp_req(0x160d2e0), while the host waits for CSTS to report shutdown complete)
00:17:11.396 [2024-04-24 21:31:36.946930] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.396 [2024-04-24 21:31:36.946945] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.396 [2024-04-24 21:31:36.946952] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.396 [2024-04-24 21:31:36.946959] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160d2e0) on tqpair=0x15add00 00:17:11.396 [2024-04-24 21:31:36.946973] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:17:11.396
00:17:11.396 21:31:36 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:11.396
[2024-04-24 21:31:36.980597] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
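
For reference, a minimal, illustrative C sketch (not from this run) of what the spdk_nvme_identify invocation above does through SPDK's public host API: parse the same -r transport ID, connect, read the controller data, detach. Error handling is trimmed and the app name is hypothetical.

    /*
     * Illustrative sketch only: the core of spdk_nvme_identify for the
     * -r argument shown above. Assumes an SPDK development environment.
     */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string the test passes via -r above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Drives the connect/enable/identify state machine traced below. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);
        printf("Model Number:  %.40s\n", (const char *)cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }
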
00:17:11.396 [2024-04-24 21:31:36.980678] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636371 ] 00:17:11.396 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.396 [2024-04-24 21:31:37.013417] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:11.396 [2024-04-24 21:31:37.013470] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:11.396 [2024-04-24 21:31:37.013479] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:11.396 [2024-04-24 21:31:37.013493] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:11.396 [2024-04-24 21:31:37.013504] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:11.396 [2024-04-24 21:31:37.013774] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:11.396 [2024-04-24 21:31:37.013815] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe1bd00 0 00:17:11.396 [2024-04-24 21:31:37.024641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:11.396 [2024-04-24 21:31:37.024660] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:11.396 [2024-04-24 21:31:37.024667] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:11.396 [2024-04-24 21:31:37.024688] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:11.396 [2024-04-24 21:31:37.024726] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.396 [2024-04-24 21:31:37.024738] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.396 [2024-04-24 21:31:37.024745] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1bd00) 00:17:11.396 [2024-04-24 21:31:37.024759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:11.396 [2024-04-24 21:31:37.024784] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7aec0, cid 0, qid 0 00:17:11.396 [2024-04-24 21:31:37.032643] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.396 [2024-04-24 21:31:37.032660] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.396 [2024-04-24 21:31:37.032667] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.396 [2024-04-24 21:31:37.032674] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7aec0) on tqpair=0xe1bd00 00:17:11.396 [2024-04-24 21:31:37.032692] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:11.396 [2024-04-24 21:31:37.032703] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:11.396 [2024-04-24 21:31:37.032713] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:11.396 [2024-04-24 21:31:37.032730] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.396 [2024-04-24 21:31:37.032739] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.396 [2024-04-24 
21:31:37.032745] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1bd00) 00:17:11.396 [2024-04-24 21:31:37.032757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.396 [2024-04-24 21:31:37.032784] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7aec0, cid 0, qid 0 00:17:11.396 [2024-04-24 21:31:37.032975] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.396 [2024-04-24 21:31:37.032990] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.396 [2024-04-24 21:31:37.032997] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.396 [2024-04-24 21:31:37.033004] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7aec0) on tqpair=0xe1bd00 00:17:11.396 [2024-04-24 21:31:37.033012] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:11.396 [2024-04-24 21:31:37.033025] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:11.396 [2024-04-24 21:31:37.033037] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.396 [2024-04-24 21:31:37.033045] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.396 [2024-04-24 21:31:37.033051] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1bd00) 00:17:11.397 [2024-04-24 21:31:37.033061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.397 [2024-04-24 21:31:37.033082] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7aec0, cid 0, qid 0 00:17:11.397 [2024-04-24 21:31:37.033275] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.397 [2024-04-24 21:31:37.033290] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.397 [2024-04-24 21:31:37.033297] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.033304] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7aec0) on tqpair=0xe1bd00 00:17:11.397 [2024-04-24 21:31:37.033312] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:11.397 [2024-04-24 21:31:37.033326] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:11.397 [2024-04-24 21:31:37.033338] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.033345] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.033351] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1bd00) 00:17:11.397 [2024-04-24 21:31:37.033362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.397 [2024-04-24 21:31:37.033382] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7aec0, cid 0, qid 0 00:17:11.397 [2024-04-24 21:31:37.033547] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.397 [2024-04-24 21:31:37.033562] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.397 
[2024-04-24 21:31:37.033569] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.033576] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7aec0) on tqpair=0xe1bd00 00:17:11.397 [2024-04-24 21:31:37.033584] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:11.397 [2024-04-24 21:31:37.033600] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.033609] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.033615] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1bd00) 00:17:11.397 [2024-04-24 21:31:37.033626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.397 [2024-04-24 21:31:37.033656] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7aec0, cid 0, qid 0 00:17:11.397 [2024-04-24 21:31:37.033845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.397 [2024-04-24 21:31:37.033860] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.397 [2024-04-24 21:31:37.033870] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.033878] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7aec0) on tqpair=0xe1bd00 00:17:11.397 [2024-04-24 21:31:37.033885] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:11.397 [2024-04-24 21:31:37.033894] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:11.397 [2024-04-24 21:31:37.033907] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:11.397 [2024-04-24 21:31:37.034016] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:11.397 [2024-04-24 21:31:37.034024] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:11.397 [2024-04-24 21:31:37.034035] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.034043] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.034049] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1bd00) 00:17:11.397 [2024-04-24 21:31:37.034074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.397 [2024-04-24 21:31:37.034095] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7aec0, cid 0, qid 0 00:17:11.397 [2024-04-24 21:31:37.034305] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.397 [2024-04-24 21:31:37.034320] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.397 [2024-04-24 21:31:37.034327] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.034333] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7aec0) on tqpair=0xe1bd00 00:17:11.397 
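
The entries above are the standard controller-enable sequence: read VS and CAP, check CC.EN, clear it and wait for CSTS.RDY = 0, then set CC.EN = 1 (the RDY = 1 wait follows below). A small illustrative helper (assuming a connected ctrlr handle as in the sketch further up) showing how the same registers are visible through SPDK's public accessors:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative: dump the registers the init state machine polls above
     * (VS at "read vs", CAP at "read cap", CSTS while waiting for RDY). */
    static void print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
        union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        printf("VS %u.%u  CAP.MQES %u  CAP.TO %u  CSTS.RDY %u\n",
               (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr,
               (unsigned)cap.bits.mqes, (unsigned)cap.bits.to,
               (unsigned)csts.bits.rdy);
    }
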
[2024-04-24 21:31:37.034342] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:11.397 [2024-04-24 21:31:37.034359] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.034367] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.034374] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1bd00) 00:17:11.397 [2024-04-24 21:31:37.034384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.397 [2024-04-24 21:31:37.034405] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7aec0, cid 0, qid 0 00:17:11.397 [2024-04-24 21:31:37.034571] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.397 [2024-04-24 21:31:37.034583] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.397 [2024-04-24 21:31:37.034590] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.034596] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7aec0) on tqpair=0xe1bd00 00:17:11.397 [2024-04-24 21:31:37.034604] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:11.397 [2024-04-24 21:31:37.034612] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:11.397 [2024-04-24 21:31:37.034625] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:11.397 [2024-04-24 21:31:37.034648] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:11.397 [2024-04-24 21:31:37.034664] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.034673] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1bd00) 00:17:11.397 [2024-04-24 21:31:37.034687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.397 [2024-04-24 21:31:37.034709] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7aec0, cid 0, qid 0 00:17:11.397 [2024-04-24 21:31:37.034942] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.397 [2024-04-24 21:31:37.034958] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.397 [2024-04-24 21:31:37.034964] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.034971] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1bd00): datao=0, datal=4096, cccid=0 00:17:11.397 [2024-04-24 21:31:37.034978] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe7aec0) on tqpair(0xe1bd00): expected_datao=0, payload_size=4096 00:17:11.397 [2024-04-24 21:31:37.034986] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.034996] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.035004] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:17:11.397 [2024-04-24 21:31:37.035054] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.397 [2024-04-24 21:31:37.035065] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.397 [2024-04-24 21:31:37.035072] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.035079] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7aec0) on tqpair=0xe1bd00 00:17:11.397 [2024-04-24 21:31:37.035090] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:11.397 [2024-04-24 21:31:37.035098] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:11.397 [2024-04-24 21:31:37.035105] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:11.397 [2024-04-24 21:31:37.035112] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:11.397 [2024-04-24 21:31:37.035120] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:11.397 [2024-04-24 21:31:37.035128] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:11.397 [2024-04-24 21:31:37.035142] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:11.397 [2024-04-24 21:31:37.035153] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.035161] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.397 [2024-04-24 21:31:37.035168] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1bd00) 00:17:11.397 [2024-04-24 21:31:37.035179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.397 [2024-04-24 21:31:37.035199] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7aec0, cid 0, qid 0 00:17:11.397 [2024-04-24 21:31:37.035397] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.397 [2024-04-24 21:31:37.035409] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.397 [2024-04-24 21:31:37.035415] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.035422] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7aec0) on tqpair=0xe1bd00 00:17:11.398 [2024-04-24 21:31:37.035432] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.035439] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.035445] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1bd00) 00:17:11.398 [2024-04-24 21:31:37.035455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.398 [2024-04-24 21:31:37.035469] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.035476] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.035483] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xe1bd00) 00:17:11.398 [2024-04-24 21:31:37.035491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.398 [2024-04-24 21:31:37.035501] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.035507] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.035514] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe1bd00) 00:17:11.398 [2024-04-24 21:31:37.035522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.398 [2024-04-24 21:31:37.035532] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.035538] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.035544] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.398 [2024-04-24 21:31:37.035553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.398 [2024-04-24 21:31:37.035561] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:11.398 [2024-04-24 21:31:37.035580] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:11.398 [2024-04-24 21:31:37.035592] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.035599] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1bd00) 00:17:11.398 [2024-04-24 21:31:37.035625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.398 [2024-04-24 21:31:37.035656] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7aec0, cid 0, qid 0 00:17:11.398 [2024-04-24 21:31:37.035667] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b020, cid 1, qid 0 00:17:11.398 [2024-04-24 21:31:37.035690] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b180, cid 2, qid 0 00:17:11.398 [2024-04-24 21:31:37.035698] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.398 [2024-04-24 21:31:37.035705] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b440, cid 4, qid 0 00:17:11.398 [2024-04-24 21:31:37.035927] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.398 [2024-04-24 21:31:37.035943] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.398 [2024-04-24 21:31:37.035949] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.035956] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b440) on tqpair=0xe1bd00 00:17:11.398 [2024-04-24 21:31:37.035964] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:11.398 [2024-04-24 21:31:37.035973] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:11.398 
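
The GET FEATURES KEEP ALIVE TIMER exchange above establishes the keep-alive cadence reported a few entries later ("Sending keep alive every 5000000 us"): the timeout is a connect-time controller option, 10000 ms by default, and the trace suggests SPDK pings at half that interval. A hedged sketch of overriding it (the trid is the one parsed in the first sketch; 20000 ms is an arbitrary example value):

    #include "spdk/nvme.h"

    /* Illustrative: request a non-default keep-alive timeout at connect
     * time. The target may round it to its granularity (10000 ms per the
     * identify output further below). */
    static struct spdk_nvme_ctrlr *
    connect_with_keep_alive(const struct spdk_nvme_transport_id *trid)
    {
        struct spdk_nvme_ctrlr_opts opts;

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
        opts.keep_alive_timeout_ms = 20000;  /* default is 10000 ms */
        return spdk_nvme_connect(trid, &opts, sizeof(opts));
    }
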
[2024-04-24 21:31:37.035992] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:11.398 [2024-04-24 21:31:37.036003] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:11.398 [2024-04-24 21:31:37.036014] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.036022] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.036028] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1bd00) 00:17:11.398 [2024-04-24 21:31:37.036042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.398 [2024-04-24 21:31:37.036063] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b440, cid 4, qid 0 00:17:11.398 [2024-04-24 21:31:37.036266] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.398 [2024-04-24 21:31:37.036281] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.398 [2024-04-24 21:31:37.036287] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.036294] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b440) on tqpair=0xe1bd00 00:17:11.398 [2024-04-24 21:31:37.036347] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:11.398 [2024-04-24 21:31:37.036366] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:11.398 [2024-04-24 21:31:37.036380] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.036387] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1bd00) 00:17:11.398 [2024-04-24 21:31:37.036398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.398 [2024-04-24 21:31:37.036419] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b440, cid 4, qid 0 00:17:11.398 [2024-04-24 21:31:37.036652] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.398 [2024-04-24 21:31:37.036668] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.398 [2024-04-24 21:31:37.036675] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.036681] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1bd00): datao=0, datal=4096, cccid=4 00:17:11.398 [2024-04-24 21:31:37.036689] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe7b440) on tqpair(0xe1bd00): expected_datao=0, payload_size=4096 00:17:11.398 [2024-04-24 21:31:37.036696] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.036706] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.036713] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.036770] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.398 [2024-04-24 21:31:37.036782] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.398 [2024-04-24 21:31:37.036788] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.036795] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b440) on tqpair=0xe1bd00 00:17:11.398 [2024-04-24 21:31:37.036809] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:11.398 [2024-04-24 21:31:37.036830] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:11.398 [2024-04-24 21:31:37.036848] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:11.398 [2024-04-24 21:31:37.036862] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.036869] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1bd00) 00:17:11.398 [2024-04-24 21:31:37.036880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.398 [2024-04-24 21:31:37.036902] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b440, cid 4, qid 0 00:17:11.398 [2024-04-24 21:31:37.037084] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.398 [2024-04-24 21:31:37.037096] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.398 [2024-04-24 21:31:37.037106] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.037113] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1bd00): datao=0, datal=4096, cccid=4 00:17:11.398 [2024-04-24 21:31:37.037120] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe7b440) on tqpair(0xe1bd00): expected_datao=0, payload_size=4096 00:17:11.398 [2024-04-24 21:31:37.037128] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.037168] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.037177] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.037297] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.398 [2024-04-24 21:31:37.037311] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.398 [2024-04-24 21:31:37.037318] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.398 [2024-04-24 21:31:37.037324] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b440) on tqpair=0xe1bd00 00:17:11.399 [2024-04-24 21:31:37.037345] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:11.399 [2024-04-24 21:31:37.037363] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:11.399 [2024-04-24 21:31:37.037377] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.037385] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1bd00) 00:17:11.399 [2024-04-24 21:31:37.037395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.399 [2024-04-24 21:31:37.037416] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b440, cid 4, qid 0 00:17:11.399 [2024-04-24 21:31:37.037591] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.399 [2024-04-24 21:31:37.037606] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.399 [2024-04-24 21:31:37.037613] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.037619] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1bd00): datao=0, datal=4096, cccid=4 00:17:11.399 [2024-04-24 21:31:37.037636] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe7b440) on tqpair(0xe1bd00): expected_datao=0, payload_size=4096 00:17:11.399 [2024-04-24 21:31:37.037645] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.037656] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.037663] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.037703] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.399 [2024-04-24 21:31:37.037714] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.399 [2024-04-24 21:31:37.037721] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.037727] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b440) on tqpair=0xe1bd00 00:17:11.399 [2024-04-24 21:31:37.037741] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:11.399 [2024-04-24 21:31:37.037756] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:11.399 [2024-04-24 21:31:37.037773] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:11.399 [2024-04-24 21:31:37.037784] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:11.399 [2024-04-24 21:31:37.037792] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:11.399 [2024-04-24 21:31:37.037801] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:11.399 [2024-04-24 21:31:37.037812] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:11.399 [2024-04-24 21:31:37.037821] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:11.399 [2024-04-24 21:31:37.037840] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.037849] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1bd00) 00:17:11.399 [2024-04-24 21:31:37.037860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.399 [2024-04-24 21:31:37.037871] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.037877] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.037884] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe1bd00) 00:17:11.399 [2024-04-24 21:31:37.037893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.399 [2024-04-24 21:31:37.037932] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b440, cid 4, qid 0 00:17:11.399 [2024-04-24 21:31:37.037944] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b5a0, cid 5, qid 0 00:17:11.399 [2024-04-24 21:31:37.038183] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.399 [2024-04-24 21:31:37.038196] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.399 [2024-04-24 21:31:37.038202] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.038209] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b440) on tqpair=0xe1bd00 00:17:11.399 [2024-04-24 21:31:37.038220] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.399 [2024-04-24 21:31:37.038228] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.399 [2024-04-24 21:31:37.038235] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.038241] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b5a0) on tqpair=0xe1bd00 00:17:11.399 [2024-04-24 21:31:37.038257] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.038265] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe1bd00) 00:17:11.399 [2024-04-24 21:31:37.038276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.399 [2024-04-24 21:31:37.038296] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b5a0, cid 5, qid 0 00:17:11.399 [2024-04-24 21:31:37.038489] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.399 [2024-04-24 21:31:37.038501] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.399 [2024-04-24 21:31:37.038507] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.038514] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b5a0) on tqpair=0xe1bd00 00:17:11.399 [2024-04-24 21:31:37.038529] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.038538] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe1bd00) 00:17:11.399 [2024-04-24 21:31:37.038548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.399 [2024-04-24 21:31:37.038568] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b5a0, cid 5, qid 0 00:17:11.399 [2024-04-24 21:31:37.038733] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.399 [2024-04-24 21:31:37.038748] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.399 [2024-04-24 21:31:37.038755] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.038765] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b5a0) on tqpair=0xe1bd00 00:17:11.399 [2024-04-24 21:31:37.038782] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.038790] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe1bd00) 00:17:11.399 [2024-04-24 21:31:37.038800] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.399 [2024-04-24 21:31:37.038821] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b5a0, cid 5, qid 0 00:17:11.399 [2024-04-24 21:31:37.038982] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.399 [2024-04-24 21:31:37.038997] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.399 [2024-04-24 21:31:37.039003] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.039010] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b5a0) on tqpair=0xe1bd00 00:17:11.399 [2024-04-24 21:31:37.039030] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.039040] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe1bd00) 00:17:11.399 [2024-04-24 21:31:37.039051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.399 [2024-04-24 21:31:37.039063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.039071] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1bd00) 00:17:11.399 [2024-04-24 21:31:37.039080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.399 [2024-04-24 21:31:37.039092] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.039099] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xe1bd00) 00:17:11.399 [2024-04-24 21:31:37.039108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.399 [2024-04-24 21:31:37.039120] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.399 [2024-04-24 21:31:37.039127] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe1bd00) 00:17:11.399 [2024-04-24 21:31:37.039137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.399 [2024-04-24 21:31:37.039158] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b5a0, cid 5, qid 0 00:17:11.399 [2024-04-24 21:31:37.039169] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b440, cid 4, qid 0 00:17:11.399 [2024-04-24 21:31:37.039176] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b700, cid 6, qid 0 00:17:11.399 [2024-04-24 21:31:37.039184] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b860, cid 7, qid 0 00:17:11.399 [2024-04-24 21:31:37.039450] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.400 [2024-04-24 21:31:37.039465] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.400 [2024-04-24 21:31:37.039471] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.039478] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1bd00): datao=0, datal=8192, cccid=5 00:17:11.400 [2024-04-24 21:31:37.039485] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe7b5a0) on tqpair(0xe1bd00): expected_datao=0, payload_size=8192 00:17:11.400 [2024-04-24 21:31:37.039493] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.039616] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.039626] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043651] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.400 [2024-04-24 21:31:37.043664] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.400 [2024-04-24 21:31:37.043672] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043678] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1bd00): datao=0, datal=512, cccid=4 00:17:11.400 [2024-04-24 21:31:37.043686] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe7b440) on tqpair(0xe1bd00): expected_datao=0, payload_size=512 00:17:11.400 [2024-04-24 21:31:37.043693] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043702] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043709] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043718] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.400 [2024-04-24 21:31:37.043726] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.400 [2024-04-24 21:31:37.043732] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043738] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1bd00): datao=0, datal=512, cccid=6 00:17:11.400 [2024-04-24 21:31:37.043746] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe7b700) on tqpair(0xe1bd00): expected_datao=0, payload_size=512 00:17:11.400 [2024-04-24 21:31:37.043753] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043762] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043769] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043777] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:11.400 [2024-04-24 21:31:37.043786] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:11.400 [2024-04-24 21:31:37.043792] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043798] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1bd00): datao=0, datal=4096, cccid=7 00:17:11.400 [2024-04-24 21:31:37.043805] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xe7b860) on tqpair(0xe1bd00): expected_datao=0, payload_size=4096 00:17:11.400 [2024-04-24 21:31:37.043812] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043822] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043829] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043841] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.400 [2024-04-24 21:31:37.043850] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.400 [2024-04-24 21:31:37.043857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043863] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b5a0) on tqpair=0xe1bd00 00:17:11.400 [2024-04-24 21:31:37.043883] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.400 [2024-04-24 21:31:37.043895] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.400 [2024-04-24 21:31:37.043901] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043908] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b440) on tqpair=0xe1bd00 00:17:11.400 [2024-04-24 21:31:37.043935] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.400 [2024-04-24 21:31:37.043946] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.400 [2024-04-24 21:31:37.043953] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043959] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b700) on tqpair=0xe1bd00 00:17:11.400 [2024-04-24 21:31:37.043970] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.400 [2024-04-24 21:31:37.043979] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.400 [2024-04-24 21:31:37.043985] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.400 [2024-04-24 21:31:37.043991] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b860) on tqpair=0xe1bd00 00:17:11.400 ===================================================== 00:17:11.400 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:11.400 ===================================================== 00:17:11.400 Controller Capabilities/Features 00:17:11.400 ================================ 00:17:11.400 Vendor ID: 8086 00:17:11.400 Subsystem Vendor ID: 8086 00:17:11.400 Serial Number: SPDK00000000000001 00:17:11.400 Model Number: SPDK bdev Controller 00:17:11.400 Firmware Version: 24.05 00:17:11.400 Recommended Arb Burst: 6 00:17:11.400 IEEE OUI Identifier: e4 d2 5c 00:17:11.400 Multi-path I/O 00:17:11.400 May have multiple subsystem ports: Yes 00:17:11.400 May have multiple controllers: Yes 00:17:11.400 Associated with SR-IOV VF: No 00:17:11.400 Max Data Transfer Size: 131072 00:17:11.400 Max Number of Namespaces: 32 00:17:11.400 Max Number of I/O Queues: 127 00:17:11.400 NVMe Specification Version (VS): 1.3 00:17:11.400 NVMe Specification Version (Identify): 1.3 00:17:11.400 Maximum Queue Entries: 128 00:17:11.400 Contiguous Queues Required: Yes 00:17:11.400 Arbitration Mechanisms Supported 00:17:11.400 Weighted Round Robin: Not Supported 00:17:11.400 Vendor Specific: Not Supported 00:17:11.400 Reset Timeout: 15000 ms 00:17:11.400 Doorbell Stride: 4 bytes 00:17:11.400 
NVM Subsystem Reset: Not Supported 00:17:11.400 Command Sets Supported 00:17:11.400 NVM Command Set: Supported 00:17:11.400 Boot Partition: Not Supported 00:17:11.400 Memory Page Size Minimum: 4096 bytes 00:17:11.400 Memory Page Size Maximum: 4096 bytes 00:17:11.400 Persistent Memory Region: Not Supported 00:17:11.400 Optional Asynchronous Events Supported 00:17:11.400 Namespace Attribute Notices: Supported 00:17:11.400 Firmware Activation Notices: Not Supported 00:17:11.400 ANA Change Notices: Not Supported 00:17:11.400 PLE Aggregate Log Change Notices: Not Supported 00:17:11.400 LBA Status Info Alert Notices: Not Supported 00:17:11.400 EGE Aggregate Log Change Notices: Not Supported 00:17:11.400 Normal NVM Subsystem Shutdown event: Not Supported 00:17:11.400 Zone Descriptor Change Notices: Not Supported 00:17:11.400 Discovery Log Change Notices: Not Supported 00:17:11.400 Controller Attributes 00:17:11.400 128-bit Host Identifier: Supported 00:17:11.400 Non-Operational Permissive Mode: Not Supported 00:17:11.400 NVM Sets: Not Supported 00:17:11.400 Read Recovery Levels: Not Supported 00:17:11.400 Endurance Groups: Not Supported 00:17:11.400 Predictable Latency Mode: Not Supported 00:17:11.400 Traffic Based Keep Alive: Not Supported 00:17:11.400 Namespace Granularity: Not Supported 00:17:11.400 SQ Associations: Not Supported 00:17:11.400 UUID List: Not Supported 00:17:11.400 Multi-Domain Subsystem: Not Supported 00:17:11.400 Fixed Capacity Management: Not Supported 00:17:11.400 Variable Capacity Management: Not Supported 00:17:11.400 Delete Endurance Group: Not Supported 00:17:11.400 Delete NVM Set: Not Supported 00:17:11.400 Extended LBA Formats Supported: Not Supported 00:17:11.400 Flexible Data Placement Supported: Not Supported 00:17:11.400 00:17:11.400 Controller Memory Buffer Support 00:17:11.400 ================================ 00:17:11.400 Supported: No 00:17:11.400 00:17:11.400 Persistent Memory Region Support 00:17:11.400 ================================ 00:17:11.400 Supported: No 00:17:11.400 00:17:11.400 Admin Command Set Attributes 00:17:11.400 ============================ 00:17:11.400 Security Send/Receive: Not Supported 00:17:11.400 Format NVM: Not Supported 00:17:11.400 Firmware Activate/Download: Not Supported 00:17:11.400 Namespace Management: Not Supported 00:17:11.400 Device Self-Test: Not Supported 00:17:11.400 Directives: Not Supported 00:17:11.401 NVMe-MI: Not Supported 00:17:11.401 Virtualization Management: Not Supported 00:17:11.401 Doorbell Buffer Config: Not Supported 00:17:11.401 Get LBA Status Capability: Not Supported 00:17:11.401 Command & Feature Lockdown Capability: Not Supported 00:17:11.401 Abort Command Limit: 4 00:17:11.401 Async Event Request Limit: 4 00:17:11.401 Number of Firmware Slots: N/A 00:17:11.401 Firmware Slot 1 Read-Only: N/A 00:17:11.401 Firmware Activation Without Reset: N/A 00:17:11.401 Multiple Update Detection Support: N/A 00:17:11.401 Firmware Update Granularity: No Information Provided 00:17:11.401 Per-Namespace SMART Log: No 00:17:11.401 Asymmetric Namespace Access Log Page: Not Supported 00:17:11.401 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:11.401 Command Effects Log Page: Supported 00:17:11.401 Get Log Page Extended Data: Supported 00:17:11.401 Telemetry Log Pages: Not Supported 00:17:11.401 Persistent Event Log Pages: Not Supported 00:17:11.401 Supported Log Pages Log Page: May Support 00:17:11.401 Commands Supported & Effects Log Page: Not Supported 00:17:11.401 Feature Identifiers & Effects Log Page: May Support
00:17:11.401 NVMe-MI Commands & Effects Log Page: May Support 00:17:11.401 Data Area 4 for Telemetry Log: Not Supported 00:17:11.401 Error Log Page Entries Supported: 128 00:17:11.401 Keep Alive: Supported 00:17:11.401 Keep Alive Granularity: 10000 ms 00:17:11.401 00:17:11.401 NVM Command Set Attributes 00:17:11.401 ========================== 00:17:11.401 Submission Queue Entry Size 00:17:11.401 Max: 64 00:17:11.401 Min: 64 00:17:11.401 Completion Queue Entry Size 00:17:11.401 Max: 16 00:17:11.401 Min: 16 00:17:11.401 Number of Namespaces: 32 00:17:11.401 Compare Command: Supported 00:17:11.401 Write Uncorrectable Command: Not Supported 00:17:11.401 Dataset Management Command: Supported 00:17:11.401 Write Zeroes Command: Supported 00:17:11.401 Set Features Save Field: Not Supported 00:17:11.401 Reservations: Supported 00:17:11.401 Timestamp: Not Supported 00:17:11.401 Copy: Supported 00:17:11.401 Volatile Write Cache: Present 00:17:11.401 Atomic Write Unit (Normal): 1 00:17:11.401 Atomic Write Unit (PFail): 1 00:17:11.401 Atomic Compare & Write Unit: 1 00:17:11.401 Fused Compare & Write: Supported 00:17:11.401 Scatter-Gather List 00:17:11.401 SGL Command Set: Supported 00:17:11.401 SGL Keyed: Supported 00:17:11.401 SGL Bit Bucket Descriptor: Not Supported 00:17:11.401 SGL Metadata Pointer: Not Supported 00:17:11.401 Oversized SGL: Not Supported 00:17:11.401 SGL Metadata Address: Not Supported 00:17:11.401 SGL Offset: Supported 00:17:11.401 Transport SGL Data Block: Not Supported 00:17:11.401 Replay Protected Memory Block: Not Supported 00:17:11.401 00:17:11.401 Firmware Slot Information 00:17:11.401 ========================= 00:17:11.401 Active slot: 1 00:17:11.401 Slot 1 Firmware Revision: 24.05 00:17:11.401 00:17:11.401 00:17:11.401 Commands Supported and Effects 00:17:11.401 ============================== 00:17:11.401 Admin Commands 00:17:11.401 -------------- 00:17:11.401 Get Log Page (02h): Supported 00:17:11.401 Identify (06h): Supported 00:17:11.401 Abort (08h): Supported 00:17:11.401 Set Features (09h): Supported 00:17:11.401 Get Features (0Ah): Supported 00:17:11.401 Asynchronous Event Request (0Ch): Supported 00:17:11.401 Keep Alive (18h): Supported 00:17:11.401 I/O Commands 00:17:11.401 ------------ 00:17:11.401 Flush (00h): Supported LBA-Change 00:17:11.401 Write (01h): Supported LBA-Change 00:17:11.401 Read (02h): Supported 00:17:11.401 Compare (05h): Supported 00:17:11.401 Write Zeroes (08h): Supported LBA-Change 00:17:11.401 Dataset Management (09h): Supported LBA-Change 00:17:11.401 Copy (19h): Supported LBA-Change 00:17:11.401 Unknown (79h): Supported LBA-Change 00:17:11.401 Unknown (7Ah): Supported 00:17:11.401 00:17:11.401 Error Log 00:17:11.401 ========= 00:17:11.401 00:17:11.401 Arbitration 00:17:11.401 =========== 00:17:11.401 Arbitration Burst: 1 00:17:11.401 00:17:11.401 Power Management 00:17:11.401 ================ 00:17:11.401 Number of Power States: 1 00:17:11.401 Current Power State: Power State #0 00:17:11.401 Power State #0: 00:17:11.401 Max Power: 0.00 W 00:17:11.401 Non-Operational State: Operational 00:17:11.401 Entry Latency: Not Reported 00:17:11.401 Exit Latency: Not Reported 00:17:11.401 Relative Read Throughput: 0 00:17:11.401 Relative Read Latency: 0 00:17:11.401 Relative Write Throughput: 0 00:17:11.401 Relative Write Latency: 0 00:17:11.401 Idle Power: Not Reported 00:17:11.401 Active Power: Not Reported 00:17:11.401 Non-Operational Permissive Mode: Not Supported 00:17:11.401 00:17:11.401 Health Information 00:17:11.401 ================== 
00:17:11.401 Critical Warnings: 00:17:11.401 Available Spare Space: OK 00:17:11.401 Temperature: OK 00:17:11.401 Device Reliability: OK 00:17:11.401 Read Only: No 00:17:11.401 Volatile Memory Backup: OK 00:17:11.401 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:11.401 Temperature Threshold: [2024-04-24 21:31:37.044116] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.401 [2024-04-24 21:31:37.044127] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe1bd00) 00:17:11.401 [2024-04-24 21:31:37.044138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.401 [2024-04-24 21:31:37.044161] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b860, cid 7, qid 0 00:17:11.401 [2024-04-24 21:31:37.044367] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.401 [2024-04-24 21:31:37.044383] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.401 [2024-04-24 21:31:37.044389] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.401 [2024-04-24 21:31:37.044396] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b860) on tqpair=0xe1bd00 00:17:11.401 [2024-04-24 21:31:37.044439] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:11.401 [2024-04-24 21:31:37.044459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.401 [2024-04-24 21:31:37.044471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.401 [2024-04-24 21:31:37.044480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.401 [2024-04-24 21:31:37.044490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.401 [2024-04-24 21:31:37.044502] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.401 [2024-04-24 21:31:37.044510] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.401 [2024-04-24 21:31:37.044516] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.401 [2024-04-24 21:31:37.044527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.401 [2024-04-24 21:31:37.044549] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.401 [2024-04-24 21:31:37.044748] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.401 [2024-04-24 21:31:37.044764] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.401 [2024-04-24 21:31:37.044771] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.401 [2024-04-24 21:31:37.044777] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.401 [2024-04-24 21:31:37.044788] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.044796] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.044802] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.402 [2024-04-24 21:31:37.044812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.402 [2024-04-24 21:31:37.044838] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.402 [2024-04-24 21:31:37.045007] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.402 [2024-04-24 21:31:37.045018] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.402 [2024-04-24 21:31:37.045025] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045032] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.402 [2024-04-24 21:31:37.045039] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:11.402 [2024-04-24 21:31:37.045047] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:11.402 [2024-04-24 21:31:37.045062] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045074] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045081] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.402 [2024-04-24 21:31:37.045092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.402 [2024-04-24 21:31:37.045111] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.402 [2024-04-24 21:31:37.045306] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.402 [2024-04-24 21:31:37.045317] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.402 [2024-04-24 21:31:37.045324] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045330] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.402 [2024-04-24 21:31:37.045346] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045354] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045361] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.402 [2024-04-24 21:31:37.045371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.402 [2024-04-24 21:31:37.045390] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.402 [2024-04-24 21:31:37.045545] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.402 [2024-04-24 21:31:37.045560] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.402 [2024-04-24 21:31:37.045567] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045574] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.402 [2024-04-24 21:31:37.045590] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045598] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045605] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.402 [2024-04-24 21:31:37.045615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.402 [2024-04-24 21:31:37.045642] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.402 [2024-04-24 21:31:37.045793] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.402 [2024-04-24 21:31:37.045808] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.402 [2024-04-24 21:31:37.045814] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045821] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.402 [2024-04-24 21:31:37.045837] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045846] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.045852] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.402 [2024-04-24 21:31:37.045863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.402 [2024-04-24 21:31:37.045883] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.402 [2024-04-24 21:31:37.046033] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.402 [2024-04-24 21:31:37.046048] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.402 [2024-04-24 21:31:37.046054] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046061] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.402 [2024-04-24 21:31:37.046077] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046085] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046095] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.402 [2024-04-24 21:31:37.046107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.402 [2024-04-24 21:31:37.046126] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.402 [2024-04-24 21:31:37.046273] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.402 [2024-04-24 21:31:37.046284] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.402 [2024-04-24 21:31:37.046291] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046297] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.402 [2024-04-24 21:31:37.046312] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046321] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046328] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.402 
[2024-04-24 21:31:37.046338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.402 [2024-04-24 21:31:37.046357] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.402 [2024-04-24 21:31:37.046502] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.402 [2024-04-24 21:31:37.046514] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.402 [2024-04-24 21:31:37.046521] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046527] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.402 [2024-04-24 21:31:37.046542] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046551] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046557] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.402 [2024-04-24 21:31:37.046567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.402 [2024-04-24 21:31:37.046587] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.402 [2024-04-24 21:31:37.046753] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.402 [2024-04-24 21:31:37.046768] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.402 [2024-04-24 21:31:37.046775] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046781] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.402 [2024-04-24 21:31:37.046797] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046806] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.046813] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.402 [2024-04-24 21:31:37.046823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.402 [2024-04-24 21:31:37.046844] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.402 [2024-04-24 21:31:37.047010] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.402 [2024-04-24 21:31:37.047025] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.402 [2024-04-24 21:31:37.047032] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.047039] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.402 [2024-04-24 21:31:37.047055] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.047063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.047070] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.402 [2024-04-24 21:31:37.047084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.402 [2024-04-24 21:31:37.047105] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.402 [2024-04-24 21:31:37.047270] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.402 [2024-04-24 21:31:37.047285] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.402 [2024-04-24 21:31:37.047291] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.047298] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.402 [2024-04-24 21:31:37.047314] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.047322] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.047329] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.402 [2024-04-24 21:31:37.047339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.402 [2024-04-24 21:31:37.047359] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.402 [2024-04-24 21:31:37.047508] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.402 [2024-04-24 21:31:37.047523] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.402 [2024-04-24 21:31:37.047530] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.047536] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.402 [2024-04-24 21:31:37.047552] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.047561] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.402 [2024-04-24 21:31:37.047568] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.403 [2024-04-24 21:31:37.047578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.403 [2024-04-24 21:31:37.047598] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.403 [2024-04-24 21:31:37.051640] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.403 [2024-04-24 21:31:37.051656] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.403 [2024-04-24 21:31:37.051663] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.403 [2024-04-24 21:31:37.051669] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.403 [2024-04-24 21:31:37.051687] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:11.403 [2024-04-24 21:31:37.051711] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:11.403 [2024-04-24 21:31:37.051718] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1bd00) 00:17:11.403 [2024-04-24 21:31:37.051729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.403 [2024-04-24 21:31:37.051751] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe7b2e0, cid 3, qid 0 00:17:11.403 [2024-04-24 21:31:37.051936] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:11.403 
[2024-04-24 21:31:37.051951] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:11.403 [2024-04-24 21:31:37.051958] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:11.403 [2024-04-24 21:31:37.051965] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe7b2e0) on tqpair=0xe1bd00 00:17:11.403 [2024-04-24 21:31:37.051978] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:17:11.662 0 Kelvin (-273 Celsius) 00:17:11.662 Available Spare: 0% 00:17:11.662 Available Spare Threshold: 0% 00:17:11.662 Life Percentage Used: 0% 00:17:11.662 Data Units Read: 0 00:17:11.662 Data Units Written: 0 00:17:11.662 Host Read Commands: 0 00:17:11.662 Host Write Commands: 0 00:17:11.662 Controller Busy Time: 0 minutes 00:17:11.662 Power Cycles: 0 00:17:11.662 Power On Hours: 0 hours 00:17:11.662 Unsafe Shutdowns: 0 00:17:11.662 Unrecoverable Media Errors: 0 00:17:11.662 Lifetime Error Log Entries: 0 00:17:11.662 Warning Temperature Time: 0 minutes 00:17:11.662 Critical Temperature Time: 0 minutes 00:17:11.662 00:17:11.662 Number of Queues 00:17:11.662 ================ 00:17:11.662 Number of I/O Submission Queues: 127 00:17:11.662 Number of I/O Completion Queues: 127 00:17:11.662 00:17:11.662 Active Namespaces 00:17:11.662 ================= 00:17:11.662 Namespace ID:1 00:17:11.662 Error Recovery Timeout: Unlimited 00:17:11.662 Command Set Identifier: NVM (00h) 00:17:11.662 Deallocate: Supported 00:17:11.662 Deallocated/Unwritten Error: Not Supported 00:17:11.662 Deallocated Read Value: Unknown 00:17:11.662 Deallocate in Write Zeroes: Not Supported 00:17:11.662 Deallocated Guard Field: 0xFFFF 00:17:11.662 Flush: Supported 00:17:11.662 Reservation: Supported 00:17:11.662 Namespace Sharing Capabilities: Multiple Controllers 00:17:11.662 Size (in LBAs): 131072 (0GiB) 00:17:11.662 Capacity (in LBAs): 131072 (0GiB) 00:17:11.662 Utilization (in LBAs): 131072 (0GiB) 00:17:11.662 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:11.662 EUI64: ABCDEF0123456789 00:17:11.662 UUID: 235537d4-21ed-411e-864d-b4ee3e479562 00:17:11.662 Thin Provisioning: Not Supported 00:17:11.662 Per-NS Atomic Units: Yes 00:17:11.662 Atomic Boundary Size (Normal): 0 00:17:11.662 Atomic Boundary Size (PFail): 0 00:17:11.662 Atomic Boundary Offset: 0 00:17:11.662 Maximum Single Source Range Length: 65535 00:17:11.662 Maximum Copy Length: 65535 00:17:11.662 Maximum Source Range Count: 1 00:17:11.662 NGUID/EUI64 Never Reused: No 00:17:11.662 Namespace Write Protected: No 00:17:11.662 Number of LBA Formats: 1 00:17:11.662 Current LBA Format: LBA Format #00 00:17:11.662 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:11.662 00:17:11.662 21:31:37 -- host/identify.sh@51 -- # sync 00:17:11.662 21:31:37 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.662 21:31:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.662 21:31:37 -- common/autotest_common.sh@10 -- # set +x 00:17:11.662 21:31:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.662 21:31:37 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:11.662 21:31:37 -- host/identify.sh@56 -- # nvmftestfini 00:17:11.662 21:31:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:11.662 21:31:37 -- nvmf/common.sh@117 -- # sync 00:17:11.662 21:31:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.662 21:31:37 -- nvmf/common.sh@120 -- # set +e 00:17:11.662 21:31:37 -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:17:11.662 21:31:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.662 rmmod nvme_tcp 00:17:11.662 rmmod nvme_fabrics 00:17:11.662 rmmod nvme_keyring 00:17:11.662 21:31:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.662 21:31:37 -- nvmf/common.sh@124 -- # set -e 00:17:11.662 21:31:37 -- nvmf/common.sh@125 -- # return 0 00:17:11.662 21:31:37 -- nvmf/common.sh@478 -- # '[' -n 2636219 ']' 00:17:11.662 21:31:37 -- nvmf/common.sh@479 -- # killprocess 2636219 00:17:11.662 21:31:37 -- common/autotest_common.sh@936 -- # '[' -z 2636219 ']' 00:17:11.662 21:31:37 -- common/autotest_common.sh@940 -- # kill -0 2636219 00:17:11.662 21:31:37 -- common/autotest_common.sh@941 -- # uname 00:17:11.662 21:31:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:11.662 21:31:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2636219 00:17:11.662 21:31:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:11.662 21:31:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:11.662 21:31:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2636219' 00:17:11.662 killing process with pid 2636219 00:17:11.662 21:31:37 -- common/autotest_common.sh@955 -- # kill 2636219 00:17:11.662 [2024-04-24 21:31:37.162802] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:11.662 21:31:37 -- common/autotest_common.sh@960 -- # wait 2636219 00:17:11.921 21:31:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:11.921 21:31:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:11.921 21:31:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:11.921 21:31:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.921 21:31:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.921 21:31:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.921 21:31:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.921 21:31:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.869 21:31:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:13.869 00:17:13.869 real 0m5.432s 00:17:13.869 user 0m4.300s 00:17:13.869 sys 0m1.847s 00:17:13.869 21:31:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:13.869 21:31:39 -- common/autotest_common.sh@10 -- # set +x 00:17:13.869 ************************************ 00:17:13.869 END TEST nvmf_identify 00:17:13.869 ************************************ 00:17:13.869 21:31:39 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:13.869 21:31:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:13.869 21:31:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.869 21:31:39 -- common/autotest_common.sh@10 -- # set +x 00:17:14.127 ************************************ 00:17:14.127 START TEST nvmf_perf 00:17:14.128 ************************************ 00:17:14.128 21:31:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:14.128 * Looking for test storage... 
00:17:14.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:14.128 21:31:39 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.128 21:31:39 -- nvmf/common.sh@7 -- # uname -s 00:17:14.128 21:31:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.128 21:31:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.128 21:31:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.128 21:31:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.128 21:31:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.128 21:31:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.128 21:31:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.128 21:31:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.128 21:31:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.128 21:31:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.128 21:31:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.128 21:31:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.128 21:31:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.128 21:31:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.128 21:31:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.128 21:31:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.128 21:31:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.128 21:31:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.128 21:31:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.128 21:31:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.128 21:31:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.128 21:31:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.128 21:31:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.128 21:31:39 -- paths/export.sh@5 -- # export PATH 00:17:14.128 21:31:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.128 21:31:39 -- nvmf/common.sh@47 -- # : 0 00:17:14.128 21:31:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.128 21:31:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.128 21:31:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.128 21:31:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.128 21:31:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.128 21:31:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.128 21:31:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.128 21:31:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.128 21:31:39 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:14.128 21:31:39 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:14.128 21:31:39 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:14.128 21:31:39 -- host/perf.sh@17 -- # nvmftestinit 00:17:14.128 21:31:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:14.128 21:31:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.128 21:31:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:14.128 21:31:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:14.128 21:31:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:14.128 21:31:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.128 21:31:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.128 21:31:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.128 21:31:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:14.128 21:31:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:14.128 21:31:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.128 21:31:39 -- common/autotest_common.sh@10 -- # set +x 00:17:16.031 21:31:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:16.031 21:31:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.031 21:31:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.031 21:31:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.031 21:31:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.031 21:31:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.031 21:31:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.031 21:31:41 -- nvmf/common.sh@295 -- # net_devs=() 
00:17:16.031 21:31:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.031 21:31:41 -- nvmf/common.sh@296 -- # e810=() 00:17:16.031 21:31:41 -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.031 21:31:41 -- nvmf/common.sh@297 -- # x722=() 00:17:16.031 21:31:41 -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.031 21:31:41 -- nvmf/common.sh@298 -- # mlx=() 00:17:16.031 21:31:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.031 21:31:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.031 21:31:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.031 21:31:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.031 21:31:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.031 21:31:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.031 21:31:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.031 21:31:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.031 21:31:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.031 21:31:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.031 21:31:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.031 21:31:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.031 21:31:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.031 21:31:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.031 21:31:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.031 21:31:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.031 21:31:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:16.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:16.031 21:31:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.031 21:31:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:16.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:16.031 21:31:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.031 21:31:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.031 21:31:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.031 21:31:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.031 21:31:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
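The two ports matched above are Intel E810-family NICs (vendor 0x8086, device 0x159b, bound to the ice driver). The same inventory can be taken outside the harness; a minimal sketch, assuming pciutils is installed and using the bus address from the trace:

    lspci -d 8086:159b                          # list all 0x8086:0x159b (E810) functions
    ls /sys/bus/pci/devices/0000:0a:00.0/net    # kernel net device bound to the first port

The second command is what the script's pci_net_devs glob amounts to when it maps each PCI function to its net device (cvl_0_0 and cvl_0_1 here).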
00:17:16.031 21:31:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:16.031 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:16.031 21:31:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.031 21:31:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.031 21:31:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.031 21:31:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.031 21:31:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.031 21:31:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:16.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:16.031 21:31:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.031 21:31:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:16.031 21:31:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:16.031 21:31:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:16.031 21:31:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:16.031 21:31:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.031 21:31:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.031 21:31:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.031 21:31:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.031 21:31:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.031 21:31:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.031 21:31:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.031 21:31:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.031 21:31:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.031 21:31:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.031 21:31:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.031 21:31:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.031 21:31:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.031 21:31:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.031 21:31:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.031 21:31:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.031 21:31:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.031 21:31:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.031 21:31:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.290 21:31:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:17:16.290 00:17:16.290 --- 10.0.0.2 ping statistics --- 00:17:16.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.290 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:17:16.290 21:31:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:17:16.290 00:17:16.290 --- 10.0.0.1 ping statistics --- 00:17:16.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.290 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:17:16.290 21:31:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.290 21:31:41 -- nvmf/common.sh@411 -- # return 0 00:17:16.290 21:31:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:16.290 21:31:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.290 21:31:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:16.290 21:31:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:16.290 21:31:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.290 21:31:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:16.290 21:31:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:16.290 21:31:41 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:16.290 21:31:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:16.290 21:31:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:16.290 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:17:16.290 21:31:41 -- nvmf/common.sh@470 -- # nvmfpid=2638309 00:17:16.290 21:31:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:16.290 21:31:41 -- nvmf/common.sh@471 -- # waitforlisten 2638309 00:17:16.290 21:31:41 -- common/autotest_common.sh@817 -- # '[' -z 2638309 ']' 00:17:16.290 21:31:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.290 21:31:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:16.290 21:31:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.290 21:31:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:16.290 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:17:16.290 [2024-04-24 21:31:41.793237] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:17:16.290 [2024-04-24 21:31:41.793320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.290 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.290 [2024-04-24 21:31:41.862935] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.549 [2024-04-24 21:31:41.978144] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.549 [2024-04-24 21:31:41.978217] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.549 [2024-04-24 21:31:41.978232] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.549 [2024-04-24 21:31:41.978244] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.549 [2024-04-24 21:31:41.978255] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
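Stripped of xtrace noise, the namespace plumbing that nvmf_tcp_init performed above comes down to the following sequence (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this rig; each command appears verbatim in the trace):

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target reachability check

The nvmf_tgt process itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the reverse ping to 10.0.0.1 also runs under ip netns exec.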
00:17:16.549 [2024-04-24 21:31:41.978320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.549 [2024-04-24 21:31:41.978398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.549 [2024-04-24 21:31:41.978401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.549 [2024-04-24 21:31:41.978350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.115 21:31:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:17.115 21:31:42 -- common/autotest_common.sh@850 -- # return 0 00:17:17.115 21:31:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:17.115 21:31:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:17.115 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:17:17.115 21:31:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.115 21:31:42 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:17.115 21:31:42 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:20.395 21:31:45 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:20.395 21:31:45 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:20.395 21:31:46 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:17:20.395 21:31:46 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:20.653 21:31:46 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:20.653 21:31:46 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:17:20.653 21:31:46 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:20.653 21:31:46 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:20.653 21:31:46 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:20.911 [2024-04-24 21:31:46.517440] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.911 21:31:46 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.168 21:31:46 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:21.168 21:31:46 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:21.425 21:31:47 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:21.425 21:31:47 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:21.683 21:31:47 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.941 [2024-04-24 21:31:47.489060] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.941 21:31:47 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:22.199 21:31:47 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:17:22.199 21:31:47 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:17:22.199 21:31:47 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
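Condensed, the target bring-up traced above is a six-call RPC sequence (rpc.py path shortened; NQN, serial number and listen address taken from the trace):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocations that follow all reuse one flag pattern, per the tool's usage text: -q (queue depth), -o (I/O size in bytes), -w (workload pattern, randrw here), -M (read percentage of the mix), -t (run time in seconds) and -r (transport ID: trtype:PCIe for the local baseline, trtype:tcp for the fabric runs).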
00:17:22.199 21:31:47 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:17:23.650 Initializing NVMe Controllers 00:17:23.650 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:17:23.650 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:17:23.650 Initialization complete. Launching workers. 00:17:23.650 ======================================================== 00:17:23.650 Latency(us) 00:17:23.650 Device Information : IOPS MiB/s Average min max 00:17:23.650 PCIE (0000:88:00.0) NSID 1 from core 0: 85175.62 332.72 375.15 43.00 7256.52 00:17:23.650 ======================================================== 00:17:23.650 Total : 85175.62 332.72 375.15 43.00 7256.52 00:17:23.650 00:17:23.650 21:31:48 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:23.650 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.023 Initializing NVMe Controllers 00:17:25.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:25.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:25.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:25.023 Initialization complete. Launching workers. 00:17:25.023 ======================================================== 00:17:25.023 Latency(us) 00:17:25.023 Device Information : IOPS MiB/s Average min max 00:17:25.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 58.00 0.23 17907.01 219.28 45875.31 00:17:25.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15218.98 7921.70 47892.76 00:17:25.023 ======================================================== 00:17:25.023 Total : 124.00 0.48 16476.28 219.28 47892.76 00:17:25.023 00:17:25.023 21:31:50 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:25.023 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.396 Initializing NVMe Controllers 00:17:26.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:26.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:26.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:26.396 Initialization complete. Launching workers. 
00:17:26.396 ======================================================== 00:17:26.396 Latency(us) 00:17:26.396 Device Information : IOPS MiB/s Average min max 00:17:26.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8314.00 32.48 3860.91 588.31 7949.72 00:17:26.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3868.00 15.11 8309.46 6581.63 16014.88 00:17:26.396 ======================================================== 00:17:26.396 Total : 12182.00 47.59 5273.40 588.31 16014.88 00:17:26.396 00:17:26.396 21:31:51 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:17:26.396 21:31:51 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:17:26.396 21:31:51 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:26.396 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.936 Initializing NVMe Controllers 00:17:28.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:28.936 Controller IO queue size 128, less than required. 00:17:28.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:28.936 Controller IO queue size 128, less than required. 00:17:28.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:28.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:28.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:28.936 Initialization complete. Launching workers. 00:17:28.936 ======================================================== 00:17:28.936 Latency(us) 00:17:28.936 Device Information : IOPS MiB/s Average min max 00:17:28.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 875.50 218.87 151937.20 88693.58 200547.32 00:17:28.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 561.00 140.25 234932.32 142785.40 348953.78 00:17:28.936 ======================================================== 00:17:28.936 Total : 1436.49 359.12 184349.50 88693.58 348953.78 00:17:28.936 00:17:28.936 21:31:54 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:28.936 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.937 No valid NVMe controllers or AIO or URING devices found 00:17:28.937 Initializing NVMe Controllers 00:17:28.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:28.937 Controller IO queue size 128, less than required. 00:17:28.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:28.937 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:28.937 Controller IO queue size 128, less than required. 00:17:28.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:28.937 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:17:28.937 WARNING: Some requested NVMe devices were skipped 00:17:28.937 21:31:54 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:28.937 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.270 Initializing NVMe Controllers 00:17:32.270 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:32.270 Controller IO queue size 128, less than required. 00:17:32.270 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:32.270 Controller IO queue size 128, less than required. 00:17:32.270 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:32.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:32.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:32.270 Initialization complete. Launching workers. 00:17:32.270 00:17:32.271 ==================== 00:17:32.271 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:32.271 TCP transport: 00:17:32.271 polls: 36030 00:17:32.271 idle_polls: 10459 00:17:32.271 sock_completions: 25571 00:17:32.271 nvme_completions: 3635 00:17:32.271 submitted_requests: 5444 00:17:32.271 queued_requests: 1 00:17:32.271 00:17:32.271 ==================== 00:17:32.271 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:32.271 TCP transport: 00:17:32.271 polls: 42863 00:17:32.271 idle_polls: 13911 00:17:32.271 sock_completions: 28952 00:17:32.271 nvme_completions: 3039 00:17:32.271 submitted_requests: 4574 00:17:32.271 queued_requests: 1 00:17:32.271 ======================================================== 00:17:32.271 Latency(us) 00:17:32.271 Device Information : IOPS MiB/s Average min max 00:17:32.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 907.53 226.88 143914.11 77011.34 236205.97 00:17:32.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 758.69 189.67 173466.32 63730.17 218874.50 00:17:32.271 ======================================================== 00:17:32.271 Total : 1666.22 416.56 157370.29 63730.17 236205.97 00:17:32.271 00:17:32.271 21:31:57 -- host/perf.sh@66 -- # sync 00:17:32.271 21:31:57 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.271 21:31:57 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:32.271 21:31:57 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:32.271 21:31:57 -- host/perf.sh@114 -- # nvmftestfini 00:17:32.271 21:31:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:32.271 21:31:57 -- nvmf/common.sh@117 -- # sync 00:17:32.271 21:31:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.271 21:31:57 -- nvmf/common.sh@120 -- # set +e 00:17:32.271 21:31:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.271 21:31:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.271 rmmod nvme_tcp 00:17:32.271 rmmod nvme_fabrics 00:17:32.271 rmmod nvme_keyring 00:17:32.271 21:31:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.271 21:31:57 -- nvmf/common.sh@124 -- # set -e 00:17:32.271 21:31:57 -- nvmf/common.sh@125 -- # return 0 00:17:32.271 21:31:57 -- 
nvmf/common.sh@478 -- # '[' -n 2638309 ']' 00:17:32.271 21:31:57 -- nvmf/common.sh@479 -- # killprocess 2638309 00:17:32.271 21:31:57 -- common/autotest_common.sh@936 -- # '[' -z 2638309 ']' 00:17:32.271 21:31:57 -- common/autotest_common.sh@940 -- # kill -0 2638309 00:17:32.271 21:31:57 -- common/autotest_common.sh@941 -- # uname 00:17:32.271 21:31:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.271 21:31:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2638309 00:17:32.271 21:31:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:32.271 21:31:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:32.271 21:31:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2638309' 00:17:32.271 killing process with pid 2638309 00:17:32.271 21:31:57 -- common/autotest_common.sh@955 -- # kill 2638309 00:17:32.271 21:31:57 -- common/autotest_common.sh@960 -- # wait 2638309 00:17:33.643 21:31:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:33.643 21:31:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:33.643 21:31:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:33.643 21:31:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.643 21:31:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.643 21:31:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.643 21:31:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.643 21:31:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.179 21:32:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:36.179 00:17:36.179 real 0m21.618s 00:17:36.179 user 1m7.137s 00:17:36.179 sys 0m4.933s 00:17:36.179 21:32:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:36.179 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:17:36.179 ************************************ 00:17:36.179 END TEST nvmf_perf 00:17:36.179 ************************************ 00:17:36.179 21:32:01 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:36.179 21:32:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:36.179 21:32:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:36.179 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:17:36.179 ************************************ 00:17:36.179 START TEST nvmf_fio_host 00:17:36.179 ************************************ 00:17:36.179 21:32:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:36.179 * Looking for test storage... 
00:17:36.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:36.179 21:32:01 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.179 21:32:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.179 21:32:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.179 21:32:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.179 21:32:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.179 21:32:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.179 21:32:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.179 21:32:01 -- paths/export.sh@5 -- # export PATH 00:17:36.179 21:32:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.179 21:32:01 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.179 21:32:01 -- nvmf/common.sh@7 -- # uname -s 00:17:36.179 21:32:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.179 21:32:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.179 21:32:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.179 21:32:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.179 21:32:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.179 21:32:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.179 21:32:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.179 21:32:01 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.179 21:32:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.179 21:32:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.179 21:32:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.179 21:32:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.179 21:32:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.179 21:32:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.179 21:32:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.179 21:32:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.179 21:32:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.179 21:32:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.179 21:32:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.179 21:32:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.179 21:32:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.179 21:32:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.179 21:32:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.179 21:32:01 -- paths/export.sh@5 -- # export PATH 00:17:36.179 21:32:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.179 21:32:01 -- nvmf/common.sh@47 -- # : 0 00:17:36.179 21:32:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.179 21:32:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.179 21:32:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.179 21:32:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.179 21:32:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.179 21:32:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:36.179 21:32:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:36.179 21:32:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.179 21:32:01 -- host/fio.sh@12 -- # nvmftestinit 00:17:36.179 21:32:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:36.179 21:32:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.179 21:32:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:36.179 21:32:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:36.179 21:32:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:36.179 21:32:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.179 21:32:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.179 21:32:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.179 21:32:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:36.179 21:32:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:36.179 21:32:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:36.179 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:17:38.090 21:32:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:38.090 21:32:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:38.090 21:32:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:38.090 21:32:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:38.090 21:32:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:38.090 21:32:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:38.090 21:32:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:38.090 21:32:03 -- nvmf/common.sh@295 -- # net_devs=() 00:17:38.090 21:32:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:38.090 21:32:03 -- nvmf/common.sh@296 -- # e810=() 00:17:38.090 21:32:03 -- nvmf/common.sh@296 -- # local -ga e810 00:17:38.090 21:32:03 -- nvmf/common.sh@297 -- # x722=() 00:17:38.090 21:32:03 -- nvmf/common.sh@297 -- # local -ga x722 00:17:38.090 21:32:03 -- nvmf/common.sh@298 -- # mlx=() 00:17:38.090 21:32:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:38.090 21:32:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.090 21:32:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.090 21:32:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.090 21:32:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.090 21:32:03 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.090 21:32:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.090 21:32:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.090 21:32:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.090 21:32:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.090 21:32:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.090 21:32:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.090 21:32:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:38.090 21:32:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:38.090 21:32:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:38.090 21:32:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.090 21:32:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:38.090 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:38.090 21:32:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.090 21:32:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:38.090 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:38.090 21:32:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:38.090 21:32:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.090 21:32:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.090 21:32:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:38.090 21:32:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.090 21:32:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:38.090 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:38.090 21:32:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.090 21:32:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.090 21:32:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.090 21:32:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:38.090 21:32:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.090 21:32:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:38.090 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:38.090 21:32:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.090 21:32:03 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:38.090 21:32:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:38.090 21:32:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:38.090 21:32:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:38.090 21:32:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.090 21:32:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.090 21:32:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.090 21:32:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:38.090 21:32:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.090 21:32:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.090 21:32:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:38.090 21:32:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.090 21:32:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.090 21:32:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:38.090 21:32:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:38.090 21:32:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.090 21:32:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.090 21:32:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.090 21:32:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.090 21:32:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:38.090 21:32:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.090 21:32:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.090 21:32:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.090 21:32:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:38.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:17:38.090 00:17:38.090 --- 10.0.0.2 ping statistics --- 00:17:38.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.090 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:38.090 21:32:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:38.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:17:38.090 00:17:38.090 --- 10.0.0.1 ping statistics --- 00:17:38.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.090 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:38.091 21:32:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.091 21:32:03 -- nvmf/common.sh@411 -- # return 0 00:17:38.091 21:32:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:38.091 21:32:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.091 21:32:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:38.091 21:32:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:38.091 21:32:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.091 21:32:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:38.091 21:32:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:38.091 21:32:03 -- host/fio.sh@14 -- # [[ y != y ]] 00:17:38.091 21:32:03 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:17:38.091 21:32:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:38.091 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:17:38.091 21:32:03 -- host/fio.sh@22 -- # nvmfpid=2642371 00:17:38.091 21:32:03 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:38.091 21:32:03 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.091 21:32:03 -- host/fio.sh@26 -- # waitforlisten 2642371 00:17:38.091 21:32:03 -- common/autotest_common.sh@817 -- # '[' -z 2642371 ']' 00:17:38.091 21:32:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.091 21:32:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:38.091 21:32:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.091 21:32:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:38.091 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:17:38.091 [2024-04-24 21:32:03.684755] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:17:38.091 [2024-04-24 21:32:03.684847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.091 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.091 [2024-04-24 21:32:03.758754] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:38.351 [2024-04-24 21:32:03.880005] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.351 [2024-04-24 21:32:03.880072] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.351 [2024-04-24 21:32:03.880088] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.351 [2024-04-24 21:32:03.880102] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.351 [2024-04-24 21:32:03.880114] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
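For reference, the namespace bring-up and target configuration traced above reduce to a short sequence. The sketch below is illustrative only, not the harness itself: the interface names (cvl_0_0/cvl_0_1), the 10.0.0.x addresses, and the relative SPDK paths are simply the values this run happened to use, and the harness's waits and error handling are omitted.

  # Target side: isolate one E810 port in a network namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Start the target inside the namespace, then configure it over JSON-RPC
  # (the same RPCs the trace below issues; the real script waits for the
  # RPC socket before sending them).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

fio is then pointed at that export through the SPDK fio plugin, as the trace below shows:

  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096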
00:17:38.351 [2024-04-24 21:32:03.880175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.351 [2024-04-24 21:32:03.880206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.351 [2024-04-24 21:32:03.880342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.351 [2024-04-24 21:32:03.880344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.351 21:32:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:38.351 21:32:04 -- common/autotest_common.sh@850 -- # return 0 00:17:38.351 21:32:04 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.351 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.351 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:17:38.351 [2024-04-24 21:32:04.017295] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.351 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.351 21:32:04 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:17:38.351 21:32:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:38.351 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:17:38.609 21:32:04 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:38.609 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.609 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:17:38.609 Malloc1 00:17:38.609 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.609 21:32:04 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.609 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.609 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:17:38.609 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.609 21:32:04 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:38.609 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.609 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:17:38.609 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.609 21:32:04 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.609 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.609 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:17:38.609 [2024-04-24 21:32:04.094454] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.609 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.609 21:32:04 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:38.609 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.609 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:17:38.609 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.609 21:32:04 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:17:38.609 21:32:04 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:38.609 21:32:04 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:38.609 21:32:04 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:38.609 21:32:04 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:38.609 21:32:04 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:38.609 21:32:04 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:38.609 21:32:04 -- common/autotest_common.sh@1327 -- # shift 00:17:38.609 21:32:04 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:38.609 21:32:04 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:38.609 21:32:04 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:38.609 21:32:04 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:38.609 21:32:04 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:38.609 21:32:04 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:38.609 21:32:04 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:38.609 21:32:04 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:38.609 21:32:04 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:38.609 21:32:04 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:38.609 21:32:04 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:38.609 21:32:04 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:38.609 21:32:04 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:38.609 21:32:04 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:38.609 21:32:04 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:38.868 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:38.868 fio-3.35 00:17:38.868 Starting 1 thread 00:17:38.868 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.402 00:17:41.402 test: (groupid=0, jobs=1): err= 0: pid=2642505: Wed Apr 24 21:32:06 2024 00:17:41.402 read: IOPS=9059, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2006msec) 00:17:41.402 slat (nsec): min=1927, max=163982, avg=2573.92, stdev=1797.90 00:17:41.402 clat (usec): min=2448, max=13689, avg=7827.06, stdev=583.83 00:17:41.402 lat (usec): min=2475, max=13691, avg=7829.63, stdev=583.73 00:17:41.402 clat percentiles (usec): 00:17:41.402 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:17:41.402 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:17:41.402 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8455], 95.00th=[ 8717], 00:17:41.402 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11731], 99.95th=[12125], 00:17:41.402 | 99.99th=[13698] 00:17:41.402 bw ( KiB/s): min=35440, max=36928, per=99.87%, avg=36192.00, stdev=622.84, samples=4 00:17:41.402 iops : min= 8860, max= 9232, avg=9048.00, stdev=155.71, samples=4 00:17:41.402 write: IOPS=9071, BW=35.4MiB/s (37.2MB/s)(71.1MiB/2006msec); 0 zone resets 00:17:41.402 slat (usec): min=2, max=130, avg= 2.71, stdev= 1.38 00:17:41.402 clat (usec): 
min=1598, max=11894, avg=6257.23, stdev=510.59 00:17:41.402 lat (usec): min=1607, max=11896, avg=6259.94, stdev=510.54 00:17:41.402 clat percentiles (usec): 00:17:41.402 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:17:41.402 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:17:41.402 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 7046], 00:17:41.402 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 9634], 99.95th=[10814], 00:17:41.402 | 99.99th=[11863] 00:17:41.402 bw ( KiB/s): min=36200, max=36416, per=100.00%, avg=36298.00, stdev=88.93, samples=4 00:17:41.402 iops : min= 9050, max= 9104, avg=9074.50, stdev=22.23, samples=4 00:17:41.402 lat (msec) : 2=0.01%, 4=0.10%, 10=99.76%, 20=0.13% 00:17:41.402 cpu : usr=51.62%, sys=38.80%, ctx=74, majf=0, minf=5 00:17:41.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:41.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:41.402 issued rwts: total=18174,18198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:41.402 00:17:41.402 Run status group 0 (all jobs): 00:17:41.402 READ: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.4MB), run=2006-2006msec 00:17:41.402 WRITE: bw=35.4MiB/s (37.2MB/s), 35.4MiB/s-35.4MiB/s (37.2MB/s-37.2MB/s), io=71.1MiB (74.5MB), run=2006-2006msec 00:17:41.402 21:32:06 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:41.402 21:32:06 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:41.402 21:32:06 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:41.402 21:32:06 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:41.402 21:32:06 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:41.402 21:32:06 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:41.402 21:32:06 -- common/autotest_common.sh@1327 -- # shift 00:17:41.402 21:32:06 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:41.402 21:32:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:41.402 21:32:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:41.402 21:32:06 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:41.402 21:32:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:41.402 21:32:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:41.402 21:32:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:41.402 21:32:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:41.402 21:32:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:41.402 21:32:06 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:41.402 21:32:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:41.402 21:32:06 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:17:41.402 21:32:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:41.402 21:32:06 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:41.402 21:32:06 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:41.402 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:41.402 fio-3.35 00:17:41.402 Starting 1 thread 00:17:41.402 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.932 00:17:43.932 test: (groupid=0, jobs=1): err= 0: pid=2642955: Wed Apr 24 21:32:09 2024 00:17:43.932 read: IOPS=7776, BW=122MiB/s (127MB/s)(244MiB/2008msec) 00:17:43.932 slat (nsec): min=2855, max=91907, avg=3639.33, stdev=1594.78 00:17:43.932 clat (usec): min=2665, max=20881, avg=9998.08, stdev=2556.99 00:17:43.932 lat (usec): min=2669, max=20885, avg=10001.72, stdev=2557.12 00:17:43.932 clat percentiles (usec): 00:17:43.932 | 1.00th=[ 4817], 5.00th=[ 6194], 10.00th=[ 6915], 20.00th=[ 7832], 00:17:43.932 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10421], 00:17:43.932 | 70.00th=[11076], 80.00th=[11863], 90.00th=[13435], 95.00th=[14746], 00:17:43.932 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19792], 99.95th=[20317], 00:17:43.932 | 99.99th=[20841] 00:17:43.932 bw ( KiB/s): min=59712, max=69120, per=50.86%, avg=63280.00, stdev=4106.78, samples=4 00:17:43.932 iops : min= 3732, max= 4320, avg=3955.00, stdev=256.67, samples=4 00:17:43.932 write: IOPS=4574, BW=71.5MiB/s (74.9MB/s)(129MiB/1810msec); 0 zone resets 00:17:43.932 slat (usec): min=30, max=190, avg=34.09, stdev= 5.82 00:17:43.932 clat (usec): min=2717, max=20779, avg=11371.17, stdev=2009.93 00:17:43.932 lat (usec): min=2748, max=20811, avg=11405.25, stdev=2011.19 00:17:43.932 clat percentiles (usec): 00:17:43.932 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9634], 00:17:43.932 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:17:43.932 | 70.00th=[12125], 80.00th=[12911], 90.00th=[13829], 95.00th=[14877], 00:17:43.932 | 99.00th=[17695], 99.50th=[19530], 99.90th=[20055], 99.95th=[20317], 00:17:43.932 | 99.99th=[20841] 00:17:43.932 bw ( KiB/s): min=61408, max=72288, per=90.03%, avg=65896.00, stdev=4620.31, samples=4 00:17:43.932 iops : min= 3838, max= 4518, avg=4118.50, stdev=288.77, samples=4 00:17:43.932 lat (msec) : 4=0.24%, 10=42.92%, 20=56.72%, 50=0.12% 00:17:43.932 cpu : usr=71.75%, sys=22.82%, ctx=28, majf=0, minf=1 00:17:43.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:43.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.932 issued rwts: total=15616,8280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:43.932 00:17:43.932 Run status group 0 (all jobs): 00:17:43.932 READ: bw=122MiB/s (127MB/s), 122MiB/s-122MiB/s (127MB/s-127MB/s), io=244MiB (256MB), run=2008-2008msec 00:17:43.932 WRITE: bw=71.5MiB/s (74.9MB/s), 71.5MiB/s-71.5MiB/s (74.9MB/s-74.9MB/s), io=129MiB (136MB), run=1810-1810msec 00:17:43.932 21:32:09 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.932 21:32:09 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.932 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:17:43.932 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.932 21:32:09 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:17:43.932 21:32:09 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:17:43.932 21:32:09 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:17:43.932 21:32:09 -- host/fio.sh@84 -- # nvmftestfini 00:17:43.932 21:32:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:43.932 21:32:09 -- nvmf/common.sh@117 -- # sync 00:17:43.932 21:32:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.932 21:32:09 -- nvmf/common.sh@120 -- # set +e 00:17:43.932 21:32:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.932 21:32:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.932 rmmod nvme_tcp 00:17:43.932 rmmod nvme_fabrics 00:17:43.932 rmmod nvme_keyring 00:17:43.932 21:32:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.932 21:32:09 -- nvmf/common.sh@124 -- # set -e 00:17:43.932 21:32:09 -- nvmf/common.sh@125 -- # return 0 00:17:43.932 21:32:09 -- nvmf/common.sh@478 -- # '[' -n 2642371 ']' 00:17:43.932 21:32:09 -- nvmf/common.sh@479 -- # killprocess 2642371 00:17:43.932 21:32:09 -- common/autotest_common.sh@936 -- # '[' -z 2642371 ']' 00:17:43.932 21:32:09 -- common/autotest_common.sh@940 -- # kill -0 2642371 00:17:43.932 21:32:09 -- common/autotest_common.sh@941 -- # uname 00:17:43.932 21:32:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.932 21:32:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2642371 00:17:43.932 21:32:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:43.932 21:32:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:43.932 21:32:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2642371' 00:17:43.932 killing process with pid 2642371 00:17:43.932 21:32:09 -- common/autotest_common.sh@955 -- # kill 2642371 00:17:43.932 21:32:09 -- common/autotest_common.sh@960 -- # wait 2642371 00:17:44.192 21:32:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:44.192 21:32:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:44.192 21:32:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:44.192 21:32:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.192 21:32:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:44.192 21:32:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.192 21:32:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.192 21:32:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.095 21:32:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:46.095 00:17:46.095 real 0m10.309s 00:17:46.095 user 0m26.216s 00:17:46.095 sys 0m3.868s 00:17:46.095 21:32:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:46.095 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:17:46.095 ************************************ 00:17:46.095 END TEST nvmf_fio_host 00:17:46.095 ************************************ 00:17:46.095 21:32:11 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:46.095 21:32:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:46.095 21:32:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:46.095 21:32:11 -- common/autotest_common.sh@10 -- # 
set +x 00:17:46.355 ************************************ 00:17:46.355 START TEST nvmf_failover 00:17:46.355 ************************************ 00:17:46.355 21:32:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:46.355 * Looking for test storage... 00:17:46.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:46.355 21:32:11 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.355 21:32:11 -- nvmf/common.sh@7 -- # uname -s 00:17:46.355 21:32:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.355 21:32:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.355 21:32:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.355 21:32:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.355 21:32:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.355 21:32:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.355 21:32:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.355 21:32:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.355 21:32:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.355 21:32:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.356 21:32:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:46.356 21:32:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:46.356 21:32:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.356 21:32:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.356 21:32:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.356 21:32:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.356 21:32:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.356 21:32:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.356 21:32:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.356 21:32:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.356 21:32:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.356 21:32:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.356 21:32:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.356 21:32:11 -- paths/export.sh@5 -- # export PATH 00:17:46.356 21:32:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.356 21:32:11 -- nvmf/common.sh@47 -- # : 0 00:17:46.356 21:32:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.356 21:32:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.356 21:32:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.356 21:32:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.356 21:32:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.356 21:32:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.356 21:32:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.356 21:32:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.356 21:32:11 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.356 21:32:11 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.356 21:32:11 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:46.356 21:32:11 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.356 21:32:11 -- host/failover.sh@18 -- # nvmftestinit 00:17:46.356 21:32:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:46.356 21:32:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.356 21:32:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:46.356 21:32:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:46.356 21:32:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:46.356 21:32:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.356 21:32:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.356 21:32:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.356 21:32:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:46.356 21:32:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:46.356 21:32:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:46.356 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:17:48.260 21:32:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:48.260 21:32:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:48.260 21:32:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:48.260 21:32:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:48.260 21:32:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:48.260 21:32:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:48.260 21:32:13 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:17:48.260 21:32:13 -- nvmf/common.sh@295 -- # net_devs=() 00:17:48.260 21:32:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:48.260 21:32:13 -- nvmf/common.sh@296 -- # e810=() 00:17:48.260 21:32:13 -- nvmf/common.sh@296 -- # local -ga e810 00:17:48.260 21:32:13 -- nvmf/common.sh@297 -- # x722=() 00:17:48.260 21:32:13 -- nvmf/common.sh@297 -- # local -ga x722 00:17:48.260 21:32:13 -- nvmf/common.sh@298 -- # mlx=() 00:17:48.260 21:32:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:48.260 21:32:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.260 21:32:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.260 21:32:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.260 21:32:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.260 21:32:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.260 21:32:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.260 21:32:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.260 21:32:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.260 21:32:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.260 21:32:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.260 21:32:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.260 21:32:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:48.260 21:32:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:48.260 21:32:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:48.260 21:32:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.260 21:32:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:48.260 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:48.260 21:32:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.260 21:32:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:48.260 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:48.260 21:32:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:48.260 21:32:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:48.260 21:32:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.260 21:32:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.260 21:32:13 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:17:48.260 21:32:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.260 21:32:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:48.260 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:48.260 21:32:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.260 21:32:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.260 21:32:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.260 21:32:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:48.260 21:32:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.260 21:32:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:48.260 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:48.260 21:32:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.260 21:32:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:48.260 21:32:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:48.261 21:32:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:48.261 21:32:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:48.261 21:32:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:48.261 21:32:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.261 21:32:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.261 21:32:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.261 21:32:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:48.261 21:32:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.261 21:32:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.261 21:32:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:48.261 21:32:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.261 21:32:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.261 21:32:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:48.261 21:32:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:48.261 21:32:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.261 21:32:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.261 21:32:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.261 21:32:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.261 21:32:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:48.261 21:32:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.520 21:32:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.520 21:32:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.520 21:32:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:48.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:17:48.520 00:17:48.520 --- 10.0.0.2 ping statistics --- 00:17:48.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.520 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:17:48.520 21:32:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms
00:17:48.520
00:17:48.520 --- 10.0.0.1 ping statistics ---
00:17:48.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:48.520 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:17:48.520 21:32:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:48.520 21:32:13 -- nvmf/common.sh@411 -- # return 0
00:17:48.520 21:32:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:17:48.520 21:32:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:48.520 21:32:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:17:48.520 21:32:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:17:48.520 21:32:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:48.520 21:32:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:17:48.520 21:32:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
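The nvmf_tcp_init sequence above is plain Linux plumbing: one port of the dual-port E810 (cvl_0_0) is moved into its own network namespace to act as the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic really crosses the hardware. A minimal standalone sketch of the same wiring, with placeholder port names eth0/eth1 (back-to-back cabling assumed):

  # resolve the netdev behind a PCI function, as nvmf/common.sh does via the sysfs glob
  ls /sys/bus/pci/devices/0000:0a:00.0/net/        # -> e.g. cvl_0_0
  # give the target side its own namespace (eth0/eth1 are placeholders)
  ip netns add tgt_ns
  ip link set eth0 netns tgt_ns                    # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev eth1                 # initiator address, root namespace
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth0
  ip link set eth1 up
  ip netns exec tgt_ns ip link set eth0 up
  ip netns exec tgt_ns ip link set lo up
  iptables -I INPUT 1 -i eth1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP replies
  ping -c 1 10.0.0.2                               # initiator -> target sanity check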
00:17:48.520 21:32:14 -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:17:48.520 21:32:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:17:48.520 21:32:14 -- common/autotest_common.sh@710 -- # xtrace_disable
00:17:48.520 21:32:14 -- common/autotest_common.sh@10 -- # set +x
00:17:48.520 21:32:14 -- nvmf/common.sh@470 -- # nvmfpid=2645156
00:17:48.520 21:32:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:17:48.520 21:32:14 -- nvmf/common.sh@471 -- # waitforlisten 2645156
00:17:48.520 21:32:14 -- common/autotest_common.sh@817 -- # '[' -z 2645156 ']'
00:17:48.520 21:32:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:48.520 21:32:14 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:48.520 21:32:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:48.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:48.520 21:32:14 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:48.520 21:32:14 -- common/autotest_common.sh@10 -- # set +x
00:17:48.520 [2024-04-24 21:32:14.056704] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:17:48.520 [2024-04-24 21:32:14.056796] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:48.520 EAL: No free 2048 kB hugepages reported on node 1
00:17:48.520 [2024-04-24 21:32:14.121235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:17:48.779 [2024-04-24 21:32:14.228745] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:48.779 [2024-04-24 21:32:14.228806] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:48.779 [2024-04-24 21:32:14.228835] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:48.779 [2024-04-24 21:32:14.228847] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:48.779 [2024-04-24 21:32:14.228857] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:48.779 [2024-04-24 21:32:14.228934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:48.779 [2024-04-24 21:32:14.228995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:17:48.779 [2024-04-24 21:32:14.228998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:48.779 21:32:14 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:48.779 21:32:14 -- common/autotest_common.sh@850 -- # return 0
00:17:48.779 21:32:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:17:48.779 21:32:14 -- common/autotest_common.sh@716 -- # xtrace_disable
00:17:48.779 21:32:14 -- common/autotest_common.sh@10 -- # set +x
00:17:48.779 21:32:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:48.779 21:32:14 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:17:49.037 [2024-04-24 21:32:14.587782] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:49.037 21:32:14 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:17:49.295 Malloc0
00:17:49.295 21:32:14 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:17:49.553 21:32:15 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:17:49.811 21:32:15 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:50.376 [2024-04-24 21:32:15.748353] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:50.376 21:32:15 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:17:50.376 [2024-04-24 21:32:16.029150] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:17:50.376 21:32:16 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:17:50.634 [2024-04-24 21:32:16.270043] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
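The target bring-up above is a short, fixed RPC sequence; condensed, with rpc.py standing for the full scripts/rpc.py path (default target socket /var/tmp/spdk.sock), it is:

  rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8192-byte in-capsule data
  rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                      # three listeners = three candidate paths
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done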
00:17:50.634 21:32:16 -- host/failover.sh@31 -- # bdevperf_pid=2645447
00:17:50.634 21:32:16 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:17:50.634 21:32:16 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:50.634 21:32:16 -- host/failover.sh@34 -- # waitforlisten 2645447 /var/tmp/bdevperf.sock
00:17:50.634 21:32:16 -- common/autotest_common.sh@817 -- # '[' -z 2645447 ']'
00:17:50.634 21:32:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:50.634 21:32:16 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:50.634 21:32:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:50.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:50.634 21:32:16 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:50.634 21:32:16 -- common/autotest_common.sh@10 -- # set +x
00:17:51.200 21:32:16 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:51.200 21:32:16 -- common/autotest_common.sh@850 -- # return 0
00:17:51.200 21:32:16 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:51.458 NVMe0n1
00:17:51.458 21:32:17 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:52.024
00:17:52.024 21:32:17 -- host/failover.sh@39 -- # run_test_pid=2645579
00:17:52.024 21:32:17 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:52.024 21:32:17 -- host/failover.sh@41 -- # sleep 1
00:17:52.958 21:32:18 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:53.217 [2024-04-24 21:32:18.636593] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd49370 is same with the state(5) to be set
...
00:17:53.218 [2024-04-24 21:32:18.637362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd49370 is same with the state(5) to be set
00:17:53.218 21:32:18 -- host/failover.sh@45 -- # sleep 3
00:17:56.498 21:32:21 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:56.498 00
00:17:56.498 21:32:22 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:17:56.756 [2024-04-24 21:32:22.346037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd49b80 is same with the state(5) to be set
...
00:17:56.757 [2024-04-24 21:32:22.346645] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd49b80 is same with the state(5) to be set
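Both attach calls use the same controller name (-b NVMe0), so bdevperf sees a single NVMe0n1 bdev with more than one path to cnode1. Each nvmf_subsystem_remove_listener then yanks the path in use; the burst of tcp.c:1587 messages (first and last occurrences kept above) is the target draining that qpair while the host reconnects through a surviving listener. The rotation can be driven by hand against the two RPC sockets; a sketch, with full rpc.py paths omitted, and noting that the bdev_nvme_get_controllers output shape varies across SPDK releases:

  # host side: inspect the multipath controller behind NVMe0n1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0
  # target side: fail the active path, give the host time to switch, then restore it
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420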
00:17:56.757 21:32:22 -- host/failover.sh@50 -- # sleep 3
00:18:00.034 21:32:25 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:00.034 [2024-04-24 21:32:25.637933] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:00.034 21:32:25 -- host/failover.sh@55 -- # sleep 1
00:18:01.407 21:32:26 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:18:01.407 [2024-04-24 21:32:26.888689] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0190 is same with the state(5) to be set
...
00:18:01.408 [2024-04-24 21:32:26.888921] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0190 is same with the state(5) to be set
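After this last rotation the subsystem should be back to exactly one listener: 4420 was removed and re-added, while 4421 and 4422 have both been dropped. One way to confirm that from the target side, assuming the nvmf_subsystem_get_listeners RPC available in SPDK releases of this vintage:

  rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1   # expect a single trid: tcp/10.0.0.2:4420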
00:18:01.408 21:32:26 -- host/failover.sh@59 -- # wait 2645579
00:18:07.980 0
00:18:07.980 21:32:32 -- host/failover.sh@61 -- # killprocess 2645447
00:18:07.980 21:32:32 -- common/autotest_common.sh@936 -- # '[' -z 2645447 ']'
00:18:07.980 21:32:32 -- common/autotest_common.sh@940 -- # kill -0 2645447
00:18:07.980 21:32:32 -- common/autotest_common.sh@941 -- # uname
00:18:07.980 21:32:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:07.980 21:32:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2645447
00:18:07.980 21:32:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:07.980 21:32:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:07.980 21:32:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2645447'
00:18:07.980 killing process with pid 2645447
00:18:07.980 21:32:32 -- common/autotest_common.sh@955 -- # kill 2645447
00:18:07.980 21:32:32 -- common/autotest_common.sh@960 -- # wait 2645447
00:18:07.980 21:32:32 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:18:07.980 [2024-04-24 21:32:16.332371] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:18:07.980 [2024-04-24 21:32:16.332447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2645447 ]
00:18:07.980 EAL: No free 2048 kB hugepages reported on node 1
00:18:07.980 [2024-04-24 21:32:16.392961] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:07.980 [2024-04-24 21:32:16.503229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:07.980 Running I/O for 15 seconds...
00:18:07.980 [2024-04-24 21:32:18.637773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:07.980 [2024-04-24 21:32:18.637815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
...
00:18:07.981 [2024-04-24 21:32:18.639498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:07.981 [2024-04-24 21:32:18.639511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:07.981 [2024-04-24 21:32:18.639525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:07.981 [2024-04-24 21:32:18.639538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
...
00:18:07.982 [2024-04-24 21:32:18.640749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:07.982 [2024-04-24 21:32:18.640762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:07.983 [2024-04-24 21:32:18.640777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.640791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.640806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.640819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.640834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.640847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.640861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.640875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.640890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.640903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.640918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.640946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.640960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.640973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.640987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75488 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.983 [2024-04-24 21:32:18.641406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.983 [2024-04-24 21:32:18.641437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.983 [2024-04-24 21:32:18.641465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.983 [2024-04-24 21:32:18.641492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.983 [2024-04-24 21:32:18.641541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.983 [2024-04-24 21:32:18.641553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74984 len:8 PRP1 0x0 PRP2 0x0 00:18:07.983 [2024-04-24 21:32:18.641565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.983 [2024-04-24 21:32:18.641649] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14b4e70 was disconnected and freed. reset controller. 
00:18:07.983 [2024-04-24 21:32:18.641669] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:18:07.983 [2024-04-24 21:32:18.641700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:07.983 [2024-04-24 21:32:18.641718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:07.983 [2024-04-24 21:32:18.641732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:07.983 [2024-04-24 21:32:18.641745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:07.983 [2024-04-24 21:32:18.641759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:07.983 [2024-04-24 21:32:18.641771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:07.983 [2024-04-24 21:32:18.641784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:07.983 [2024-04-24 21:32:18.641796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:07.983 [2024-04-24 21:32:18.641809] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:07.983 [2024-04-24 21:32:18.641854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14963e0 (9): Bad file descriptor
00:18:07.983 [2024-04-24 21:32:18.645064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:07.983 [2024-04-24 21:32:18.761916] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:07.983 [2024-04-24 21:32:22.347525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:07.983 [2024-04-24 21:32:22.347569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further interleaved READ (lba:96352 through lba:96704) and WRITE (lba:96768 through lba:97208, len:8 each) command / ABORTED - SQ DELETION completion pairs elided ...]
00:18:07.986 [2024-04-24 21:32:22.350806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:07.986 [2024-04-24 21:32:22.350823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 len:8 PRP1 0x0 PRP2 0x0
00:18:07.986 [2024-04-24 21:32:22.350841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 10 further "aborting queued i/o" / manual completion sequences (WRITE lba:97224 through lba:97296) elided ...]
00:18:07.987 [2024-04-24 21:32:22.351337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:07.987 [2024-04-24 21:32:22.351347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:07.987 [2024-04-24 21:32:22.351358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0
00:18:07.987 [2024-04-24 21:32:22.351371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97344 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96712 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96720 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96728 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96736 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.351915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96744 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.351948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 
21:32:22.351966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.351977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.351988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96752 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.352000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.352013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.987 [2024-04-24 21:32:22.352023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.987 [2024-04-24 21:32:22.352034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96760 len:8 PRP1 0x0 PRP2 0x0 00:18:07.987 [2024-04-24 21:32:22.352046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.352109] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14a28d0 was disconnected and freed. reset controller. 00:18:07.987 [2024-04-24 21:32:22.352126] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:07.987 [2024-04-24 21:32:22.352158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.987 [2024-04-24 21:32:22.352176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.352190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.987 [2024-04-24 21:32:22.352203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.352216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.987 [2024-04-24 21:32:22.352228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.352241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.987 [2024-04-24 21:32:22.352254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:22.352266] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:07.987 [2024-04-24 21:32:22.352318] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14963e0 (9): Bad file descriptor 00:18:07.987 [2024-04-24 21:32:22.355532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:07.987 [2024-04-24 21:32:22.555317] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:07.987 [2024-04-24 21:32:26.889130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.987 [2024-04-24 21:32:26.889171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:26.889200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.987 [2024-04-24 21:32:26.889216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:26.889233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.987 [2024-04-24 21:32:26.889247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:26.889272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.987 [2024-04-24 21:32:26.889302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:26.889318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.987 [2024-04-24 21:32:26.889331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.987 [2024-04-24 21:32:26.889346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.889359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.889388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.889416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889486] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889799] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.889924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.889968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.889982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.889995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.988 [2024-04-24 21:32:26.890191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.988 [2024-04-24 21:32:26.890508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.988 [2024-04-24 21:32:26.890520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.890547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.890575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.890603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.890656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 
21:32:26.890685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.890713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.890741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.890773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.890802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.890831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.890859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.890888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.989 [2024-04-24 21:32:26.890916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.989 [2024-04-24 21:32:26.890960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.890975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.989 [2024-04-24 21:32:26.890988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.989 [2024-04-24 21:32:26.891016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.989 [2024-04-24 21:32:26.891043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.989 [2024-04-24 21:32:26.891071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.989 [2024-04-24 21:32:26.891098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.989 [2024-04-24 21:32:26.891473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.989 [2024-04-24 21:32:26.891490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:07.990 [2024-04-24 21:32:26.891925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.891986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.891998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892204] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.990 [2024-04-24 21:32:26.892711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.990 [2024-04-24 21:32:26.892726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.991 [2024-04-24 21:32:26.892739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.892754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.991 [2024-04-24 21:32:26.892767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.892782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.991 [2024-04-24 21:32:26.892795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.892810] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.991 [2024-04-24 21:32:26.892823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.892838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.991 [2024-04-24 21:32:26.892852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.892868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.991 [2024-04-24 21:32:26.892881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.892896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.991 [2024-04-24 21:32:26.892909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.892924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.991 [2024-04-24 21:32:26.892937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.892967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.991 [2024-04-24 21:32:26.892980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.892993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a28d0 is same with the state(5) to be set 00:18:07.991 [2024-04-24 21:32:26.893023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.991 [2024-04-24 21:32:26.893034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.991 [2024-04-24 21:32:26.893046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67568 len:8 PRP1 0x0 PRP2 0x0 00:18:07.991 [2024-04-24 21:32:26.893058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.893119] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14a28d0 was disconnected and freed. reset controller. 
00:18:07.991 [2024-04-24 21:32:26.893138] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:07.991 [2024-04-24 21:32:26.893183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.991 [2024-04-24 21:32:26.893202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.893217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.991 [2024-04-24 21:32:26.893230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.893245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.991 [2024-04-24 21:32:26.893257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.893271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.991 [2024-04-24 21:32:26.893284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.991 [2024-04-24 21:32:26.893297] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:07.991 [2024-04-24 21:32:26.893345] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14963e0 (9): Bad file descriptor 00:18:07.991 [2024-04-24 21:32:26.896652] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:07.991 [2024-04-24 21:32:26.930824] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
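
That is one complete failover cycle as bdev_nvme reports it: the disconnected I/O qpair is freed, the trid flips from 10.0.0.2:4422 back to 4420, the admin queue's pending ASYNC EVENT REQUESTs are aborted, the controller sits briefly in failed state, and the reset against the surviving path succeeds. The alternate trids exist because the test attaches the same controller name once per listener (the same pattern is visible below at host/failover.sh@78-@80); a condensed sketch of that registration, with the addresses and NQN taken from this log:

    # First call creates bdev controller NVMe0 at 10.0.0.2:4420; the two
    # repeat calls register 4421 and 4422 as failover trids for it.
    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
    for port in 4420 4421 4422; do
        $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
            -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
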
00:18:07.991
00:18:07.991 Latency(us)
00:18:07.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:07.991 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:07.991 Verification LBA range: start 0x0 length 0x4000
00:18:07.991 NVMe0n1 : 15.01 8438.58 32.96 917.33 0.00 13654.47 1080.13 16117.00
00:18:07.991 ===================================================================================================================
00:18:07.991 Total : 8438.58 32.96 917.33 0.00 13654.47 1080.13 16117.00
00:18:07.991 Received shutdown signal, test time was about 15.000000 seconds
00:18:07.991
00:18:07.991 Latency(us)
00:18:07.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:07.991 ===================================================================================================================
00:18:07.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:07.991 21:32:32 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:07.991 21:32:32 -- host/failover.sh@65 -- # count=3
00:18:07.991 21:32:32 -- host/failover.sh@67 -- # (( count != 3 ))
00:18:07.991 21:32:32 -- host/failover.sh@73 -- # bdevperf_pid=2647307
00:18:07.991 21:32:32 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:18:07.991 21:32:32 -- host/failover.sh@75 -- # waitforlisten 2647307 /var/tmp/bdevperf.sock
00:18:07.991 21:32:32 -- common/autotest_common.sh@817 -- # '[' -z 2647307 ']'
00:18:07.991 21:32:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:07.991 21:32:32 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:07.991 21:32:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
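
The two tables above close out the first bdevperf run: 15.01 s of verify I/O at queue depth 128, about 8.4k IOPS, with roughly 917 failed I/Os per second absorbed across the forced failovers, followed by the empty post-shutdown summary. The check at host/failover.sh@65-@67 is the pass criterion for this half of the test: every failover must have ended in "Resetting controller successful", so the grep count has to be exactly 3. In effect:

    # try.txt is the captured bdevperf output (test/nvmf/host/try.txt in this job).
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || { echo "expected 3 successful resets, got $count"; exit 1; }

With that confirmed, a second bdevperf is started with -z, i.e. idle until it is driven over /var/tmp/bdevperf.sock (the perform_tests RPC that kicks it appears below at host/failover.sh@89).
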
00:18:07.991 21:32:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:07.991 21:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:07.991 21:32:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:07.991 21:32:33 -- common/autotest_common.sh@850 -- # return 0 00:18:07.991 21:32:33 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:07.991 [2024-04-24 21:32:33.426417] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:07.991 21:32:33 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:08.249 [2024-04-24 21:32:33.659046] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:08.249 21:32:33 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:08.507 NVMe0n1 00:18:08.507 21:32:34 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:09.072 00:18:09.072 21:32:34 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:09.330 00:18:09.330 21:32:34 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:09.330 21:32:34 -- host/failover.sh@82 -- # grep -q NVMe0 00:18:09.587 21:32:35 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:09.844 21:32:35 -- host/failover.sh@87 -- # sleep 3 00:18:13.125 21:32:38 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:13.125 21:32:38 -- host/failover.sh@88 -- # grep -q NVMe0 00:18:13.125 21:32:38 -- host/failover.sh@90 -- # run_test_pid=2648094 00:18:13.125 21:32:38 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:13.125 21:32:38 -- host/failover.sh@92 -- # wait 2648094 00:18:14.060 0 00:18:14.318 21:32:39 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:14.318 [2024-04-24 21:32:32.931472] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:18:14.318 [2024-04-24 21:32:32.931574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2647307 ] 00:18:14.318 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.318 [2024-04-24 21:32:32.996645] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.318 [2024-04-24 21:32:33.102708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.318 [2024-04-24 21:32:35.347219] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:14.318 [2024-04-24 21:32:35.347313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.318 [2024-04-24 21:32:35.347342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.318 [2024-04-24 21:32:35.347360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.318 [2024-04-24 21:32:35.347373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.318 [2024-04-24 21:32:35.347401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.318 [2024-04-24 21:32:35.347416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.318 [2024-04-24 21:32:35.347430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.318 [2024-04-24 21:32:35.347444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.318 [2024-04-24 21:32:35.347458] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:14.318 [2024-04-24 21:32:35.347507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:14.318 [2024-04-24 21:32:35.347545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236d3e0 (9): Bad file descriptor 00:18:14.318 [2024-04-24 21:32:35.400573] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:14.318 Running I/O for 1 seconds... 
00:18:14.318
00:18:14.318 Latency(us)
00:18:14.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:14.318 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:14.318 Verification LBA range: start 0x0 length 0x4000
00:18:14.318 NVMe0n1 : 1.01 8672.72 33.88 0.00 0.00 14704.67 2415.12 20000.62
00:18:14.318 ===================================================================================================================
00:18:14.318 Total : 8672.72 33.88 0.00 0.00 14704.67 2415.12 20000.62
00:18:14.318 21:32:39 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:14.318 21:32:39 -- host/failover.sh@95 -- # grep -q NVMe0
00:18:14.318 21:32:39 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:14.575 21:32:40 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:14.575 21:32:40 -- host/failover.sh@99 -- # grep -q NVMe0
00:18:14.833 21:32:40 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:15.091 21:32:40 -- host/failover.sh@101 -- # sleep 3
00:18:18.369 21:32:43 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:18.369 21:32:43 -- host/failover.sh@103 -- # grep -q NVMe0
00:18:18.369 21:32:44 -- host/failover.sh@108 -- # killprocess 2647307
00:18:18.369 21:32:44 -- common/autotest_common.sh@936 -- # '[' -z 2647307 ']'
00:18:18.369 21:32:44 -- common/autotest_common.sh@940 -- # kill -0 2647307
00:18:18.369 21:32:44 -- common/autotest_common.sh@941 -- # uname
00:18:18.628 21:32:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:18.628 21:32:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2647307
00:18:18.628 21:32:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:18.628 21:32:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:18.628 21:32:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2647307'
killing process with pid 2647307
00:18:18.628 21:32:44 -- common/autotest_common.sh@955 -- # kill 2647307
00:18:18.628 21:32:44 -- common/autotest_common.sh@960 -- # wait 2647307
00:18:18.886 21:32:44 -- host/failover.sh@110 -- # sync
00:18:18.886 21:32:44 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:19.144 21:32:44 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:18:19.144 21:32:44 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:18:19.144 21:32:44 -- host/failover.sh@116 -- # nvmftestfini
00:18:19.144 21:32:44 -- nvmf/common.sh@477 -- # nvmfcleanup
00:18:19.144 21:32:44 -- nvmf/common.sh@117 -- # sync
00:18:19.144 21:32:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:19.144 21:32:44 -- nvmf/common.sh@120 -- # set +e
00:18:19.144 21:32:44 -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:19.144 21:32:44 -- nvmf/common.sh@122 --
# modprobe -v -r nvme-tcp 00:18:19.144 rmmod nvme_tcp 00:18:19.144 rmmod nvme_fabrics 00:18:19.144 rmmod nvme_keyring 00:18:19.144 21:32:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.144 21:32:44 -- nvmf/common.sh@124 -- # set -e 00:18:19.144 21:32:44 -- nvmf/common.sh@125 -- # return 0 00:18:19.144 21:32:44 -- nvmf/common.sh@478 -- # '[' -n 2645156 ']' 00:18:19.144 21:32:44 -- nvmf/common.sh@479 -- # killprocess 2645156 00:18:19.144 21:32:44 -- common/autotest_common.sh@936 -- # '[' -z 2645156 ']' 00:18:19.144 21:32:44 -- common/autotest_common.sh@940 -- # kill -0 2645156 00:18:19.144 21:32:44 -- common/autotest_common.sh@941 -- # uname 00:18:19.144 21:32:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.144 21:32:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2645156 00:18:19.144 21:32:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:19.144 21:32:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:19.144 21:32:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2645156' 00:18:19.144 killing process with pid 2645156 00:18:19.144 21:32:44 -- common/autotest_common.sh@955 -- # kill 2645156 00:18:19.144 21:32:44 -- common/autotest_common.sh@960 -- # wait 2645156 00:18:19.402 21:32:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:19.402 21:32:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:19.402 21:32:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:19.402 21:32:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.402 21:32:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.402 21:32:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.402 21:32:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.402 21:32:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.940 21:32:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:21.940 00:18:21.940 real 0m35.201s 00:18:21.940 user 2m4.367s 00:18:21.940 sys 0m5.632s 00:18:21.940 21:32:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:21.940 21:32:47 -- common/autotest_common.sh@10 -- # set +x 00:18:21.940 ************************************ 00:18:21.940 END TEST nvmf_failover 00:18:21.940 ************************************ 00:18:21.940 21:32:47 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:21.940 21:32:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:21.940 21:32:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:21.940 21:32:47 -- common/autotest_common.sh@10 -- # set +x 00:18:21.940 ************************************ 00:18:21.940 START TEST nvmf_discovery 00:18:21.940 ************************************ 00:18:21.940 21:32:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:21.940 * Looking for test storage... 
00:18:21.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:21.940 21:32:47 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.940 21:32:47 -- nvmf/common.sh@7 -- # uname -s 00:18:21.940 21:32:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.940 21:32:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.940 21:32:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.940 21:32:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.940 21:32:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.940 21:32:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.940 21:32:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.940 21:32:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.940 21:32:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.940 21:32:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.940 21:32:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.940 21:32:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.940 21:32:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.940 21:32:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.940 21:32:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.940 21:32:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.940 21:32:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.940 21:32:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.940 21:32:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.940 21:32:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.940 21:32:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.940 21:32:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.940 21:32:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.940 21:32:47 -- paths/export.sh@5 -- # export PATH 00:18:21.940 21:32:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.940 21:32:47 -- nvmf/common.sh@47 -- # : 0 00:18:21.940 21:32:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:21.940 21:32:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:21.940 21:32:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.940 21:32:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.940 21:32:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.940 21:32:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:21.940 21:32:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:21.940 21:32:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:21.940 21:32:47 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:21.940 21:32:47 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:21.940 21:32:47 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:21.940 21:32:47 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:21.940 21:32:47 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:21.940 21:32:47 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:21.940 21:32:47 -- host/discovery.sh@25 -- # nvmftestinit 00:18:21.940 21:32:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:21.940 21:32:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.940 21:32:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:21.940 21:32:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:21.940 21:32:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:21.940 21:32:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.940 21:32:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.940 21:32:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.940 21:32:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:21.940 21:32:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:21.940 21:32:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:21.940 21:32:47 -- common/autotest_common.sh@10 -- # set +x 00:18:23.841 21:32:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:23.841 21:32:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.841 21:32:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.841 21:32:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.841 21:32:49 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.841 21:32:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.841 21:32:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.841 21:32:49 -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.841 21:32:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.841 21:32:49 -- nvmf/common.sh@296 -- # e810=() 00:18:23.841 21:32:49 -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.841 21:32:49 -- nvmf/common.sh@297 -- # x722=() 00:18:23.841 21:32:49 -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.841 21:32:49 -- nvmf/common.sh@298 -- # mlx=() 00:18:23.841 21:32:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.841 21:32:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.841 21:32:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.841 21:32:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.841 21:32:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.841 21:32:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.841 21:32:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.841 21:32:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.841 21:32:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.841 21:32:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.841 21:32:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.841 21:32:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.841 21:32:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.841 21:32:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.841 21:32:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.841 21:32:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.841 21:32:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:23.841 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:23.841 21:32:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.841 21:32:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:23.841 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:23.841 21:32:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.841 21:32:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.841 
21:32:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.841 21:32:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:23.841 21:32:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.841 21:32:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:23.841 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:23.841 21:32:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.841 21:32:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.841 21:32:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.841 21:32:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:23.841 21:32:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.841 21:32:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:23.841 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:23.841 21:32:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.841 21:32:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:23.841 21:32:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:23.841 21:32:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:23.841 21:32:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:23.841 21:32:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.842 21:32:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.842 21:32:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.842 21:32:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.842 21:32:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.842 21:32:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.842 21:32:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.842 21:32:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.842 21:32:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.842 21:32:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.842 21:32:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.842 21:32:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.842 21:32:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.842 21:32:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.842 21:32:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.842 21:32:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.842 21:32:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.842 21:32:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.842 21:32:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.842 21:32:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:18:23.842 00:18:23.842 --- 10.0.0.2 ping statistics --- 00:18:23.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.842 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:18:23.842 21:32:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:23.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:18:23.842 00:18:23.842 --- 10.0.0.1 ping statistics --- 00:18:23.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.842 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:18:23.842 21:32:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.842 21:32:49 -- nvmf/common.sh@411 -- # return 0 00:18:23.842 21:32:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:23.842 21:32:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.842 21:32:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:23.842 21:32:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:23.842 21:32:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.842 21:32:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:23.842 21:32:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:23.842 21:32:49 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:23.842 21:32:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:23.842 21:32:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:23.842 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:18:23.842 21:32:49 -- nvmf/common.sh@470 -- # nvmfpid=2650711 00:18:23.842 21:32:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:23.842 21:32:49 -- nvmf/common.sh@471 -- # waitforlisten 2650711 00:18:23.842 21:32:49 -- common/autotest_common.sh@817 -- # '[' -z 2650711 ']' 00:18:23.842 21:32:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.842 21:32:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.842 21:32:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.842 21:32:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.842 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:18:23.842 [2024-04-24 21:32:49.454273] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:18:23.842 [2024-04-24 21:32:49.454370] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.842 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.100 [2024-04-24 21:32:49.523379] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.100 [2024-04-24 21:32:49.636572] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.100 [2024-04-24 21:32:49.636651] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.100 [2024-04-24 21:32:49.636677] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.100 [2024-04-24 21:32:49.636690] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.100 [2024-04-24 21:32:49.636702] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
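
Between the PCI scan and this app start, nvmf_tcp_init carved the two ice ports into a point-to-point test bed: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), and a single ping in each direction proves the link. Condensed from the xtrace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt started next runs under ip netns exec cvl_0_0_ns_spdk, which is why its listeners later bind to 10.0.0.2.
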
00:18:24.100 [2024-04-24 21:32:49.636735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.032 21:32:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:25.032 21:32:50 -- common/autotest_common.sh@850 -- # return 0 00:18:25.032 21:32:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:25.032 21:32:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:25.032 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.032 21:32:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.032 21:32:50 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:25.032 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.032 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.032 [2024-04-24 21:32:50.460148] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.032 21:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.032 21:32:50 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:25.032 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.032 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.032 [2024-04-24 21:32:50.468311] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:25.032 21:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.032 21:32:50 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:25.032 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.032 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.032 null0 00:18:25.032 21:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.032 21:32:50 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:25.032 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.032 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.032 null1 00:18:25.032 21:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.032 21:32:50 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:25.032 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.032 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.032 21:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.032 21:32:50 -- host/discovery.sh@45 -- # hostpid=2650863 00:18:25.032 21:32:50 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:25.032 21:32:50 -- host/discovery.sh@46 -- # waitforlisten 2650863 /tmp/host.sock 00:18:25.032 21:32:50 -- common/autotest_common.sh@817 -- # '[' -z 2650863 ']' 00:18:25.032 21:32:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:18:25.032 21:32:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:25.032 21:32:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:25.032 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:25.032 21:32:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:25.032 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.032 [2024-04-24 21:32:50.540080] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
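
At this point the discovery test has both processes up: the target (pid 2650711, core mask 0x2, RPC socket /var/tmp/spdk.sock inside the namespace) and a second nvmf_tgt acting as the host (pid 2650863, core mask 0x1, RPC socket /tmp/host.sock). The target-side preparation condenses to a handful of RPCs, reconstructed from the rpc_cmd xtrace above (rpc_cmd is the suite's wrapper around rpc.py; socket selection is omitted in this sketch):

    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py'
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    $RPC bdev_null_create null0 1000 512    # arguments exactly as logged
    $RPC bdev_null_create null1 1000 512
    $RPC bdev_wait_for_examine

null0 and null1 back the namespaces that nqn.2016-06.io.spdk:cnode0 exposes a few steps later, and 8009 is the well-known discovery port that the host-side bdev_nvme_start_discovery polls.
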
00:18:25.032 [2024-04-24 21:32:50.540150] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2650863 ] 00:18:25.032 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.032 [2024-04-24 21:32:50.601163] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.290 [2024-04-24 21:32:50.715738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.290 21:32:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:25.290 21:32:50 -- common/autotest_common.sh@850 -- # return 0 00:18:25.290 21:32:50 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:25.290 21:32:50 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:25.290 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.290 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.290 21:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.290 21:32:50 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:25.290 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.290 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.290 21:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.290 21:32:50 -- host/discovery.sh@72 -- # notify_id=0 00:18:25.290 21:32:50 -- host/discovery.sh@83 -- # get_subsystem_names 00:18:25.290 21:32:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:25.290 21:32:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:25.290 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.290 21:32:50 -- host/discovery.sh@59 -- # sort 00:18:25.290 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.290 21:32:50 -- host/discovery.sh@59 -- # xargs 00:18:25.290 21:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.290 21:32:50 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:25.290 21:32:50 -- host/discovery.sh@84 -- # get_bdev_list 00:18:25.290 21:32:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.290 21:32:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:25.290 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.290 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.290 21:32:50 -- host/discovery.sh@55 -- # sort 00:18:25.290 21:32:50 -- host/discovery.sh@55 -- # xargs 00:18:25.290 21:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.290 21:32:50 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:25.290 21:32:50 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:25.290 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.290 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.290 21:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.290 21:32:50 -- host/discovery.sh@87 -- # get_subsystem_names 00:18:25.290 21:32:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:25.290 21:32:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:25.290 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.290 21:32:50 -- common/autotest_common.sh@10 -- # set 
+x 00:18:25.290 21:32:50 -- host/discovery.sh@59 -- # sort 00:18:25.290 21:32:50 -- host/discovery.sh@59 -- # xargs 00:18:25.290 21:32:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.548 21:32:50 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:25.548 21:32:50 -- host/discovery.sh@88 -- # get_bdev_list 00:18:25.548 21:32:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.548 21:32:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.548 21:32:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:25.548 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:25.548 21:32:50 -- host/discovery.sh@55 -- # sort 00:18:25.548 21:32:50 -- host/discovery.sh@55 -- # xargs 00:18:25.548 21:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.548 21:32:51 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:25.548 21:32:51 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:25.548 21:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.548 21:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:25.548 21:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.548 21:32:51 -- host/discovery.sh@91 -- # get_subsystem_names 00:18:25.548 21:32:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:25.548 21:32:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:25.548 21:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.548 21:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:25.548 21:32:51 -- host/discovery.sh@59 -- # sort 00:18:25.548 21:32:51 -- host/discovery.sh@59 -- # xargs 00:18:25.548 21:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.548 21:32:51 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:25.548 21:32:51 -- host/discovery.sh@92 -- # get_bdev_list 00:18:25.548 21:32:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.548 21:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.548 21:32:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:25.548 21:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:25.548 21:32:51 -- host/discovery.sh@55 -- # sort 00:18:25.548 21:32:51 -- host/discovery.sh@55 -- # xargs 00:18:25.548 21:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.548 21:32:51 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:25.548 21:32:51 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:25.548 21:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.548 21:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:25.548 [2024-04-24 21:32:51.138174] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.548 21:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.548 21:32:51 -- host/discovery.sh@97 -- # get_subsystem_names 00:18:25.548 21:32:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:25.548 21:32:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:25.548 21:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.548 21:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:25.548 21:32:51 -- host/discovery.sh@59 -- # sort 00:18:25.548 21:32:51 -- host/discovery.sh@59 -- # xargs 00:18:25.548 21:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.548 21:32:51 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:25.548 21:32:51 -- host/discovery.sh@98 -- # get_bdev_list 00:18:25.548 21:32:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.548 21:32:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:25.548 21:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.548 21:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:25.548 21:32:51 -- host/discovery.sh@55 -- # sort 00:18:25.548 21:32:51 -- host/discovery.sh@55 -- # xargs 00:18:25.548 21:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.548 21:32:51 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:25.548 21:32:51 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:25.548 21:32:51 -- host/discovery.sh@79 -- # expected_count=0 00:18:25.548 21:32:51 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:25.806 21:32:51 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:25.806 21:32:51 -- common/autotest_common.sh@901 -- # local max=10 00:18:25.806 21:32:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:25.806 21:32:51 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:25.806 21:32:51 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:25.806 21:32:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:25.806 21:32:51 -- host/discovery.sh@74 -- # jq '. | length' 00:18:25.806 21:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.806 21:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:25.806 21:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.806 21:32:51 -- host/discovery.sh@74 -- # notification_count=0 00:18:25.806 21:32:51 -- host/discovery.sh@75 -- # notify_id=0 00:18:25.806 21:32:51 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:25.806 21:32:51 -- common/autotest_common.sh@904 -- # return 0 00:18:25.806 21:32:51 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:25.806 21:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.806 21:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:25.806 21:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.806 21:32:51 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:25.806 21:32:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:25.806 21:32:51 -- common/autotest_common.sh@901 -- # local max=10 00:18:25.806 21:32:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:25.806 21:32:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:25.806 21:32:51 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:25.806 21:32:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:25.806 21:32:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:25.806 21:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.806 21:32:51 -- host/discovery.sh@59 -- # sort 00:18:25.806 21:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:25.806 21:32:51 -- host/discovery.sh@59 -- # xargs 00:18:25.806 21:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:18:25.806 21:32:51 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:18:25.806 21:32:51 -- common/autotest_common.sh@906 -- # sleep 1 00:18:26.371 [2024-04-24 21:32:51.858161] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:26.371 [2024-04-24 21:32:51.858194] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:26.371 [2024-04-24 21:32:51.858215] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:26.371 [2024-04-24 21:32:51.944498] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:26.628 [2024-04-24 21:32:52.170585] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:26.628 [2024-04-24 21:32:52.170637] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:26.888 21:32:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:26.888 21:32:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:26.888 21:32:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:26.888 21:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.888 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:26.888 21:32:52 -- host/discovery.sh@59 -- # sort 00:18:26.888 21:32:52 -- host/discovery.sh@59 -- # xargs 00:18:26.888 21:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.888 21:32:52 -- common/autotest_common.sh@904 -- # return 0 00:18:26.888 21:32:52 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:26.888 21:32:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:26.888 21:32:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.888 21:32:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:26.888 21:32:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.888 21:32:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.888 21:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.888 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:26.888 21:32:52 -- host/discovery.sh@55 -- # sort 00:18:26.888 21:32:52 -- host/discovery.sh@55 -- # xargs 00:18:26.888 21:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:26.888 21:32:52 -- common/autotest_common.sh@904 -- # return 0 00:18:26.888 21:32:52 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:26.888 21:32:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:26.888 21:32:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.888 21:32:52 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:26.888 21:32:52 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:26.888 21:32:52 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:26.888 21:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.888 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:26.888 21:32:52 -- host/discovery.sh@63 -- # sort -n 00:18:26.888 21:32:52 -- host/discovery.sh@63 -- # xargs 00:18:26.888 21:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:18:26.888 21:32:52 -- common/autotest_common.sh@904 -- # return 0 00:18:26.888 21:32:52 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:26.888 21:32:52 -- host/discovery.sh@79 -- # expected_count=1 00:18:26.888 21:32:52 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:26.888 21:32:52 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:26.888 21:32:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.888 21:32:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:26.888 21:32:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:26.888 21:32:52 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:26.888 21:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.888 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:26.888 21:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.888 21:32:52 -- host/discovery.sh@74 -- # notification_count=1 00:18:26.888 21:32:52 -- host/discovery.sh@75 -- # notify_id=1 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:26.888 21:32:52 -- common/autotest_common.sh@904 -- # return 0 00:18:26.888 21:32:52 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:26.888 21:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.888 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:26.888 21:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.888 21:32:52 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:26.888 21:32:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:26.888 21:32:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.888 21:32:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:26.888 21:32:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.888 21:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.888 21:32:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.888 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:26.888 21:32:52 -- host/discovery.sh@55 -- # sort 00:18:26.888 21:32:52 -- host/discovery.sh@55 -- # xargs 00:18:26.888 21:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:26.888 21:32:52 -- common/autotest_common.sh@904 -- # return 0 00:18:26.888 21:32:52 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:26.888 21:32:52 -- host/discovery.sh@79 -- # expected_count=1 00:18:26.888 21:32:52 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:26.888 21:32:52 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:26.888 21:32:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.888 21:32:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:26.888 21:32:52 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:26.888 21:32:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:26.888 21:32:52 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:26.888 21:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.888 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:26.888 21:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.147 21:32:52 -- host/discovery.sh@74 -- # notification_count=1 00:18:27.147 21:32:52 -- host/discovery.sh@75 -- # notify_id=2 00:18:27.147 21:32:52 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:27.147 21:32:52 -- common/autotest_common.sh@904 -- # return 0 00:18:27.147 21:32:52 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:27.147 21:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.147 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:27.147 [2024-04-24 21:32:52.594350] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:27.147 [2024-04-24 21:32:52.594898] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:27.147 [2024-04-24 21:32:52.594932] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:27.147 21:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.147 21:32:52 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:27.147 21:32:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:27.147 21:32:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:27.147 21:32:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:27.147 21:32:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:27.147 21:32:52 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:27.147 21:32:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:27.147 21:32:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:27.147 21:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.147 21:32:52 -- host/discovery.sh@59 -- # sort 00:18:27.147 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:27.147 21:32:52 -- host/discovery.sh@59 -- # xargs 00:18:27.147 21:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.147 21:32:52 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.147 21:32:52 -- common/autotest_common.sh@904 -- # return 0 00:18:27.147 21:32:52 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:27.147 21:32:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:27.147 21:32:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:27.147 21:32:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:27.147 21:32:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:27.147 21:32:52 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:27.147 21:32:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:27.147 21:32:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:27.147 21:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.147 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:27.147 21:32:52 -- host/discovery.sh@55 -- # sort 00:18:27.147 21:32:52 -- host/discovery.sh@55 -- # xargs 00:18:27.147 21:32:52 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:18:27.147 [2024-04-24 21:32:52.681589] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:27.147 21:32:52 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:27.147 21:32:52 -- common/autotest_common.sh@904 -- # return 0 00:18:27.147 21:32:52 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:27.147 21:32:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:27.147 21:32:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:27.147 21:32:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:27.147 21:32:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:27.147 21:32:52 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:27.147 21:32:52 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:27.147 21:32:52 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:27.147 21:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.147 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:18:27.147 21:32:52 -- host/discovery.sh@63 -- # sort -n 00:18:27.147 21:32:52 -- host/discovery.sh@63 -- # xargs 00:18:27.147 21:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.147 21:32:52 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:27.147 21:32:52 -- common/autotest_common.sh@906 -- # sleep 1 00:18:27.405 [2024-04-24 21:32:52.984049] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:27.405 [2024-04-24 21:32:52.984074] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:27.405 [2024-04-24 21:32:52.984085] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:28.339 21:32:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.339 21:32:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:28.339 21:32:53 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:28.339 21:32:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:28.339 21:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.339 21:32:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:28.339 21:32:53 -- common/autotest_common.sh@10 -- # set +x 00:18:28.339 21:32:53 -- host/discovery.sh@63 -- # sort -n 00:18:28.339 21:32:53 -- host/discovery.sh@63 -- # xargs 00:18:28.339 21:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.339 21:32:53 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:28.339 21:32:53 -- common/autotest_common.sh@904 -- # return 0 00:18:28.339 21:32:53 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:28.339 21:32:53 -- host/discovery.sh@79 -- # expected_count=0 00:18:28.339 21:32:53 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:28.339 21:32:53 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:28.339 21:32:53 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.339 21:32:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.339 21:32:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:28.339 21:32:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:28.339 21:32:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:28.339 21:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.339 21:32:53 -- host/discovery.sh@74 -- # jq '. | length' 00:18:28.339 21:32:53 -- common/autotest_common.sh@10 -- # set +x 00:18:28.339 21:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.339 21:32:53 -- host/discovery.sh@74 -- # notification_count=0 00:18:28.339 21:32:53 -- host/discovery.sh@75 -- # notify_id=2 00:18:28.339 21:32:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:28.339 21:32:53 -- common/autotest_common.sh@904 -- # return 0 00:18:28.339 21:32:53 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:28.339 21:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.339 21:32:53 -- common/autotest_common.sh@10 -- # set +x 00:18:28.339 [2024-04-24 21:32:53.822740] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:28.339 [2024-04-24 21:32:53.822773] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:28.339 21:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.339 21:32:53 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:28.339 21:32:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:28.340 21:32:53 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.340 21:32:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.340 21:32:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:28.340 21:32:53 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:28.340 21:32:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:28.340 21:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.340 21:32:53 -- common/autotest_common.sh@10 -- # set +x 00:18:28.340 21:32:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:28.340 21:32:53 -- host/discovery.sh@59 -- # sort 00:18:28.340 21:32:53 -- host/discovery.sh@59 -- # xargs 00:18:28.340 [2024-04-24 21:32:53.832011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.340 [2024-04-24 21:32:53.832046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.340 [2024-04-24 21:32:53.832084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.340 [2024-04-24 21:32:53.832099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.340 [2024-04-24 21:32:53.832112] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.340 [2024-04-24 21:32:53.832140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.340 [2024-04-24 21:32:53.832155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.340 [2024-04-24 21:32:53.832169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.340 [2024-04-24 21:32:53.832184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d8a20 is same with the state(5) to be set 00:18:28.340 21:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.340 [2024-04-24 21:32:53.842014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8a20 (9): Bad file descriptor 00:18:28.340 [2024-04-24 21:32:53.852057] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.340 [2024-04-24 21:32:53.852343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 [2024-04-24 21:32:53.852544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 [2024-04-24 21:32:53.852572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d8a20 with addr=10.0.0.2, port=4420 00:18:28.340 [2024-04-24 21:32:53.852588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d8a20 is same with the state(5) to be set 00:18:28.340 [2024-04-24 21:32:53.852611] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8a20 (9): Bad file descriptor 00:18:28.340 [2024-04-24 21:32:53.852660] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.340 [2024-04-24 21:32:53.852688] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.340 [2024-04-24 21:32:53.852704] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.340 [2024-04-24 21:32:53.852725] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
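
The autotest_common.sh@900-906 xtrace interleaved through this run is the suite's polling helper. Reconstructed from just those traced line numbers (an approximation of the real body, not a copy of autotest_common.sh), it amounts to a bounded retry loop:

waitforcondition() {
    # Mirrors the trace: @900 captures the condition string, @901 sets
    # max=10, @902 decrements it, @903 eval-uates the condition, @904
    # returns success, and @906 sleeps 1 s between attempts.
    local cond="$1"
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1   # assumption: exhausting the retries fails the test step
}

# e.g.: waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'
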
00:18:28.340 [2024-04-24 21:32:53.862152] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.340 [2024-04-24 21:32:53.862382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 [2024-04-24 21:32:53.862651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 [2024-04-24 21:32:53.862695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d8a20 with addr=10.0.0.2, port=4420 00:18:28.340 [2024-04-24 21:32:53.862712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d8a20 is same with the state(5) to be set 00:18:28.340 [2024-04-24 21:32:53.862734] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8a20 (9): Bad file descriptor 00:18:28.340 [2024-04-24 21:32:53.862756] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.340 [2024-04-24 21:32:53.862770] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.340 [2024-04-24 21:32:53.862783] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.340 [2024-04-24 21:32:53.862802] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:28.340 21:32:53 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.340 21:32:53 -- common/autotest_common.sh@904 -- # return 0 00:18:28.340 21:32:53 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.340 21:32:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.340 21:32:53 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.340 21:32:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.340 21:32:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:28.340 21:32:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:28.340 21:32:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.340 [2024-04-24 21:32:53.872229] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.340 21:32:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:28.340 21:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.340 [2024-04-24 21:32:53.872506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 21:32:53 -- common/autotest_common.sh@10 -- # set +x 00:18:28.340 [2024-04-24 21:32:53.872728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 [2024-04-24 21:32:53.872756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d8a20 with addr=10.0.0.2, port=4420 00:18:28.340 [2024-04-24 21:32:53.872774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d8a20 is same with the state(5) to be set 00:18:28.340 [2024-04-24 21:32:53.872797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8a20 (9): Bad file descriptor 00:18:28.340 [2024-04-24 21:32:53.872819] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.340 [2024-04-24 21:32:53.872833] 
nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.340 [2024-04-24 21:32:53.872847] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.340 [2024-04-24 21:32:53.872866] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:28.340 21:32:53 -- host/discovery.sh@55 -- # sort 00:18:28.340 21:32:53 -- host/discovery.sh@55 -- # xargs 00:18:28.340 [2024-04-24 21:32:53.882310] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.340 [2024-04-24 21:32:53.882556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 [2024-04-24 21:32:53.882783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 [2024-04-24 21:32:53.882810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d8a20 with addr=10.0.0.2, port=4420 00:18:28.340 [2024-04-24 21:32:53.882826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d8a20 is same with the state(5) to be set 00:18:28.340 [2024-04-24 21:32:53.882848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8a20 (9): Bad file descriptor 00:18:28.340 [2024-04-24 21:32:53.882884] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.340 [2024-04-24 21:32:53.882902] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.340 [2024-04-24 21:32:53.882920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.340 [2024-04-24 21:32:53.882939] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:28.340 [2024-04-24 21:32:53.892390] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.340 [2024-04-24 21:32:53.892594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 [2024-04-24 21:32:53.892829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 [2024-04-24 21:32:53.892855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d8a20 with addr=10.0.0.2, port=4420 00:18:28.340 [2024-04-24 21:32:53.892871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d8a20 is same with the state(5) to be set 00:18:28.340 [2024-04-24 21:32:53.892894] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8a20 (9): Bad file descriptor 00:18:28.340 [2024-04-24 21:32:53.892937] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.340 [2024-04-24 21:32:53.892956] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.340 [2024-04-24 21:32:53.892969] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.340 [2024-04-24 21:32:53.892988] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
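
The host/discovery.sh@55/@59/@63 pipelines traced above are small query helpers; a sketch assembled from exactly the rpc_cmd/jq/sort/xargs fragments shown in this log (the function bodies are inferred, not copied from the script):

get_subsystem_names() {  # host/discovery.sh@59: controller names on the host socket
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {        # host/discovery.sh@55: attached namespaces as bdev names
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {  # host/discovery.sh@63: trsvcids of one controller's paths
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

The trailing xargs flattens the jq output onto one space-separated line, which is why the conditions above can compare against plain strings like "nvme0n1 nvme0n2" or "4420 4421".
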
00:18:28.340 21:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.340 [2024-04-24 21:32:53.902466] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.340 [2024-04-24 21:32:53.902728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 [2024-04-24 21:32:53.902919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.340 [2024-04-24 21:32:53.902945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d8a20 with addr=10.0.0.2, port=4420 00:18:28.340 [2024-04-24 21:32:53.902977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d8a20 is same with the state(5) to be set 00:18:28.340 [2024-04-24 21:32:53.903001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8a20 (9): Bad file descriptor 00:18:28.340 [2024-04-24 21:32:53.903040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.340 [2024-04-24 21:32:53.903060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.340 [2024-04-24 21:32:53.903076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.340 [2024-04-24 21:32:53.903097] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:28.340 [2024-04-24 21:32:53.909062] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:28.340 [2024-04-24 21:32:53.909112] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:28.340 21:32:53 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:28.340 21:32:53 -- common/autotest_common.sh@904 -- # return 0 00:18:28.340 21:32:53 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:28.340 21:32:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:28.340 21:32:53 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.341 21:32:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.341 21:32:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:28.341 21:32:53 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:28.341 21:32:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:28.341 21:32:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:28.341 21:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.341 21:32:53 -- common/autotest_common.sh@10 -- # set +x 00:18:28.341 21:32:53 -- host/discovery.sh@63 -- # sort -n 00:18:28.341 21:32:53 -- host/discovery.sh@63 -- # xargs 00:18:28.341 21:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.341 21:32:53 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:18:28.341 21:32:53 -- common/autotest_common.sh@904 -- # return 0 00:18:28.341 21:32:53 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:28.341 21:32:53 -- host/discovery.sh@79 -- # expected_count=0 00:18:28.341 21:32:53 -- 
host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:28.341 21:32:53 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:28.341 21:32:53 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.341 21:32:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.341 21:32:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:28.341 21:32:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:28.341 21:32:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:28.341 21:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.341 21:32:53 -- host/discovery.sh@74 -- # jq '. | length' 00:18:28.341 21:32:53 -- common/autotest_common.sh@10 -- # set +x 00:18:28.341 21:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.341 21:32:53 -- host/discovery.sh@74 -- # notification_count=0 00:18:28.341 21:32:53 -- host/discovery.sh@75 -- # notify_id=2 00:18:28.341 21:32:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:28.341 21:32:53 -- common/autotest_common.sh@904 -- # return 0 00:18:28.341 21:32:53 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:28.341 21:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.341 21:32:53 -- common/autotest_common.sh@10 -- # set +x 00:18:28.341 21:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.341 21:32:54 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:28.341 21:32:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:28.341 21:32:54 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.341 21:32:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.341 21:32:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:28.341 21:32:54 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:28.341 21:32:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:28.341 21:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.341 21:32:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:28.341 21:32:54 -- common/autotest_common.sh@10 -- # set +x 00:18:28.599 21:32:54 -- host/discovery.sh@59 -- # sort 00:18:28.599 21:32:54 -- host/discovery.sh@59 -- # xargs 00:18:28.599 21:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.599 21:32:54 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:28.599 21:32:54 -- common/autotest_common.sh@904 -- # return 0 00:18:28.599 21:32:54 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:28.599 21:32:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:28.599 21:32:54 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.599 21:32:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.599 21:32:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:28.599 21:32:54 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:28.599 21:32:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.599 21:32:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:28.599 21:32:54 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.599 21:32:54 -- common/autotest_common.sh@10 -- # set +x 00:18:28.599 21:32:54 -- host/discovery.sh@55 -- # sort 00:18:28.599 21:32:54 -- host/discovery.sh@55 -- # xargs 00:18:28.599 21:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.599 21:32:54 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:28.599 21:32:54 -- common/autotest_common.sh@904 -- # return 0 00:18:28.599 21:32:54 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:28.599 21:32:54 -- host/discovery.sh@79 -- # expected_count=2 00:18:28.599 21:32:54 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:28.599 21:32:54 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:28.599 21:32:54 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.599 21:32:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.599 21:32:54 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:28.599 21:32:54 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:28.599 21:32:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:28.599 21:32:54 -- host/discovery.sh@74 -- # jq '. | length' 00:18:28.599 21:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.599 21:32:54 -- common/autotest_common.sh@10 -- # set +x 00:18:28.599 21:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.599 21:32:54 -- host/discovery.sh@74 -- # notification_count=2 00:18:28.599 21:32:54 -- host/discovery.sh@75 -- # notify_id=4 00:18:28.599 21:32:54 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:28.599 21:32:54 -- common/autotest_common.sh@904 -- # return 0 00:18:28.599 21:32:54 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:28.599 21:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.599 21:32:54 -- common/autotest_common.sh@10 -- # set +x 00:18:29.530 [2024-04-24 21:32:55.197830] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:29.531 [2024-04-24 21:32:55.197850] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:29.531 [2024-04-24 21:32:55.197869] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:29.788 [2024-04-24 21:32:55.284146] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:30.046 [2024-04-24 21:32:55.552158] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:30.046 [2024-04-24 21:32:55.552198] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:30.046 21:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.046 21:32:55 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.046 21:32:55 -- common/autotest_common.sh@638 -- # local es=0 00:18:30.046 21:32:55 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.046 21:32:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:30.046 21:32:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:30.046 21:32:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:30.046 21:32:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:30.046 21:32:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.046 21:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.046 21:32:55 -- common/autotest_common.sh@10 -- # set +x 00:18:30.046 request: 00:18:30.046 { 00:18:30.046 "name": "nvme", 00:18:30.046 "trtype": "tcp", 00:18:30.046 "traddr": "10.0.0.2", 00:18:30.046 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:30.046 "adrfam": "ipv4", 00:18:30.046 "trsvcid": "8009", 00:18:30.046 "wait_for_attach": true, 00:18:30.046 "method": "bdev_nvme_start_discovery", 00:18:30.046 "req_id": 1 00:18:30.046 } 00:18:30.046 Got JSON-RPC error response 00:18:30.046 response: 00:18:30.046 { 00:18:30.046 "code": -17, 00:18:30.046 "message": "File exists" 00:18:30.046 } 00:18:30.046 21:32:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:30.046 21:32:55 -- common/autotest_common.sh@641 -- # es=1 00:18:30.046 21:32:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:30.046 21:32:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:30.046 21:32:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:30.046 21:32:55 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:30.046 21:32:55 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:30.046 21:32:55 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:30.046 21:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.046 21:32:55 -- common/autotest_common.sh@10 -- # set +x 00:18:30.046 21:32:55 -- host/discovery.sh@67 -- # sort 00:18:30.046 21:32:55 -- host/discovery.sh@67 -- # xargs 00:18:30.046 21:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.046 21:32:55 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:30.046 21:32:55 -- host/discovery.sh@146 -- # get_bdev_list 00:18:30.046 21:32:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.046 21:32:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:30.046 21:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.046 21:32:55 -- host/discovery.sh@55 -- # sort 00:18:30.046 21:32:55 -- common/autotest_common.sh@10 -- # set +x 00:18:30.046 21:32:55 -- host/discovery.sh@55 -- # xargs 00:18:30.046 21:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.046 21:32:55 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:30.046 21:32:55 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.046 21:32:55 -- common/autotest_common.sh@638 -- # local es=0 00:18:30.046 21:32:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.046 21:32:55 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:30.046 21:32:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:30.046 21:32:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:30.046 21:32:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:30.046 21:32:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.046 21:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.046 21:32:55 -- common/autotest_common.sh@10 -- # set +x 00:18:30.046 request: 00:18:30.046 { 00:18:30.046 "name": "nvme_second", 00:18:30.046 "trtype": "tcp", 00:18:30.046 "traddr": "10.0.0.2", 00:18:30.046 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:30.046 "adrfam": "ipv4", 00:18:30.046 "trsvcid": "8009", 00:18:30.046 "wait_for_attach": true, 00:18:30.046 "method": "bdev_nvme_start_discovery", 00:18:30.046 "req_id": 1 00:18:30.046 } 00:18:30.046 Got JSON-RPC error response 00:18:30.046 response: 00:18:30.046 { 00:18:30.046 "code": -17, 00:18:30.046 "message": "File exists" 00:18:30.046 } 00:18:30.046 21:32:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:30.046 21:32:55 -- common/autotest_common.sh@641 -- # es=1 00:18:30.046 21:32:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:30.046 21:32:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:30.046 21:32:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:30.046 21:32:55 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:30.046 21:32:55 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:30.046 21:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.046 21:32:55 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:30.046 21:32:55 -- common/autotest_common.sh@10 -- # set +x 00:18:30.046 21:32:55 -- host/discovery.sh@67 -- # sort 00:18:30.046 21:32:55 -- host/discovery.sh@67 -- # xargs 00:18:30.046 21:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.046 21:32:55 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:30.046 21:32:55 -- host/discovery.sh@152 -- # get_bdev_list 00:18:30.046 21:32:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.046 21:32:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:30.046 21:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.046 21:32:55 -- common/autotest_common.sh@10 -- # set +x 00:18:30.046 21:32:55 -- host/discovery.sh@55 -- # sort 00:18:30.046 21:32:55 -- host/discovery.sh@55 -- # xargs 00:18:30.046 21:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.303 21:32:55 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:30.303 21:32:55 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:30.303 21:32:55 -- common/autotest_common.sh@638 -- # local es=0 00:18:30.304 21:32:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:30.304 21:32:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:30.304 21:32:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:30.304 21:32:55 -- common/autotest_common.sh@630 -- # 
type -t rpc_cmd 00:18:30.304 21:32:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:30.304 21:32:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:30.304 21:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.304 21:32:55 -- common/autotest_common.sh@10 -- # set +x 00:18:31.236 [2024-04-24 21:32:56.751736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.236 [2024-04-24 21:32:56.751950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.236 [2024-04-24 21:32:56.751977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x815bd0 with addr=10.0.0.2, port=8010 00:18:31.236 [2024-04-24 21:32:56.752000] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:31.236 [2024-04-24 21:32:56.752015] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:31.236 [2024-04-24 21:32:56.752028] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:32.168 [2024-04-24 21:32:57.754116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.168 [2024-04-24 21:32:57.754341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.168 [2024-04-24 21:32:57.754366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x815bd0 with addr=10.0.0.2, port=8010 00:18:32.168 [2024-04-24 21:32:57.754393] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:32.168 [2024-04-24 21:32:57.754406] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:32.168 [2024-04-24 21:32:57.754418] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:33.102 [2024-04-24 21:32:58.756344] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:33.102 request: 00:18:33.102 { 00:18:33.102 "name": "nvme_second", 00:18:33.102 "trtype": "tcp", 00:18:33.102 "traddr": "10.0.0.2", 00:18:33.102 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:33.102 "adrfam": "ipv4", 00:18:33.102 "trsvcid": "8010", 00:18:33.102 "attach_timeout_ms": 3000, 00:18:33.102 "method": "bdev_nvme_start_discovery", 00:18:33.102 "req_id": 1 00:18:33.102 } 00:18:33.102 Got JSON-RPC error response 00:18:33.102 response: 00:18:33.102 { 00:18:33.102 "code": -110, 00:18:33.102 "message": "Connection timed out" 00:18:33.102 } 00:18:33.102 21:32:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:33.102 21:32:58 -- common/autotest_common.sh@641 -- # es=1 00:18:33.102 21:32:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:33.102 21:32:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:33.102 21:32:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:33.102 21:32:58 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:33.102 21:32:58 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:33.102 21:32:58 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:33.102 21:32:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:33.102 21:32:58 -- common/autotest_common.sh@10 -- # set +x 00:18:33.102 21:32:58 -- host/discovery.sh@67 -- # sort 00:18:33.102 21:32:58 -- host/discovery.sh@67 -- # xargs 
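
The negative checks above all drive the same RPC; condensed to the two representative cases, and assuming the suite's rpc_cmd wrapper and its NOT (expect-failure) helper, they look like:

# Re-registering a discovery service for 10.0.0.2:8009 fails fast with
# -17 "File exists": the earlier bdev_nvme_start_discovery already owns
# that address, so the NOT wrapper expects the call to fail.
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Port 8010 has no listener, so each connect() gets ECONNREFUSED
# (errno = 111, as logged above) until the 3000 ms attach timeout
# expires and the RPC returns -110 "Connection timed out".
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
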
00:18:33.102 21:32:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:33.360 21:32:58 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:33.360 21:32:58 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:33.360 21:32:58 -- host/discovery.sh@161 -- # kill 2650863 00:18:33.360 21:32:58 -- host/discovery.sh@162 -- # nvmftestfini 00:18:33.360 21:32:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:33.360 21:32:58 -- nvmf/common.sh@117 -- # sync 00:18:33.360 21:32:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:33.360 21:32:58 -- nvmf/common.sh@120 -- # set +e 00:18:33.360 21:32:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:33.360 21:32:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:33.360 rmmod nvme_tcp 00:18:33.360 rmmod nvme_fabrics 00:18:33.360 rmmod nvme_keyring 00:18:33.360 21:32:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:33.360 21:32:58 -- nvmf/common.sh@124 -- # set -e 00:18:33.360 21:32:58 -- nvmf/common.sh@125 -- # return 0 00:18:33.360 21:32:58 -- nvmf/common.sh@478 -- # '[' -n 2650711 ']' 00:18:33.360 21:32:58 -- nvmf/common.sh@479 -- # killprocess 2650711 00:18:33.360 21:32:58 -- common/autotest_common.sh@936 -- # '[' -z 2650711 ']' 00:18:33.360 21:32:58 -- common/autotest_common.sh@940 -- # kill -0 2650711 00:18:33.360 21:32:58 -- common/autotest_common.sh@941 -- # uname 00:18:33.360 21:32:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:33.360 21:32:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2650711 00:18:33.360 21:32:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:33.360 21:32:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:33.360 21:32:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2650711' 00:18:33.360 killing process with pid 2650711 00:18:33.360 21:32:58 -- common/autotest_common.sh@955 -- # kill 2650711 00:18:33.360 21:32:58 -- common/autotest_common.sh@960 -- # wait 2650711 00:18:33.619 21:32:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:33.619 21:32:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:33.619 21:32:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:33.619 21:32:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.619 21:32:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.619 21:32:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.619 21:32:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.619 21:32:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.152 21:33:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:36.152 00:18:36.152 real 0m14.043s 00:18:36.152 user 0m20.341s 00:18:36.152 sys 0m2.818s 00:18:36.152 21:33:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:36.152 21:33:01 -- common/autotest_common.sh@10 -- # set +x 00:18:36.152 ************************************ 00:18:36.152 END TEST nvmf_discovery 00:18:36.152 ************************************ 00:18:36.152 21:33:01 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:36.152 21:33:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:36.152 21:33:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:36.152 21:33:01 -- common/autotest_common.sh@10 -- # set +x 00:18:36.152 ************************************ 00:18:36.152 START 
TEST nvmf_discovery_remove_ifc 00:18:36.152 ************************************ 00:18:36.152 21:33:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:36.152 * Looking for test storage... 00:18:36.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:36.152 21:33:01 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.152 21:33:01 -- nvmf/common.sh@7 -- # uname -s 00:18:36.152 21:33:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.152 21:33:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.152 21:33:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.152 21:33:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.152 21:33:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.152 21:33:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.152 21:33:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.152 21:33:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.152 21:33:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.152 21:33:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.152 21:33:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.152 21:33:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.152 21:33:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.152 21:33:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.152 21:33:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.152 21:33:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.152 21:33:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.152 21:33:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.152 21:33:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.152 21:33:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.152 21:33:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.152 21:33:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.152 21:33:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.152 21:33:01 -- paths/export.sh@5 -- # export PATH 00:18:36.152 21:33:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.152 21:33:01 -- nvmf/common.sh@47 -- # : 0 00:18:36.152 21:33:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:36.152 21:33:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:36.152 21:33:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.152 21:33:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.152 21:33:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.152 21:33:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:36.152 21:33:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:36.152 21:33:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:36.152 21:33:01 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:36.152 21:33:01 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:36.152 21:33:01 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:36.152 21:33:01 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:36.152 21:33:01 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:36.152 21:33:01 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:36.152 21:33:01 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:36.152 21:33:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:36.152 21:33:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.152 21:33:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:36.152 21:33:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:36.152 21:33:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:36.152 21:33:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.152 21:33:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.152 21:33:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.152 21:33:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:36.152 21:33:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:36.152 21:33:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:36.152 21:33:01 -- common/autotest_common.sh@10 -- # set +x 00:18:38.055 21:33:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:38.055 21:33:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:38.055 21:33:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:38.055 21:33:03 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:38.056 21:33:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:38.056 21:33:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:38.056 21:33:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:38.056 21:33:03 -- nvmf/common.sh@295 -- # net_devs=() 00:18:38.056 21:33:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:38.056 21:33:03 -- nvmf/common.sh@296 -- # e810=() 00:18:38.056 21:33:03 -- nvmf/common.sh@296 -- # local -ga e810 00:18:38.056 21:33:03 -- nvmf/common.sh@297 -- # x722=() 00:18:38.056 21:33:03 -- nvmf/common.sh@297 -- # local -ga x722 00:18:38.056 21:33:03 -- nvmf/common.sh@298 -- # mlx=() 00:18:38.056 21:33:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:38.056 21:33:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.056 21:33:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.056 21:33:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:38.056 21:33:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.056 21:33:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.056 21:33:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.056 21:33:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.056 21:33:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.056 21:33:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.056 21:33:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.056 21:33:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.056 21:33:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:38.056 21:33:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:38.056 21:33:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:38.056 21:33:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.056 21:33:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:38.056 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:38.056 21:33:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.056 21:33:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:38.056 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:38.056 21:33:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:38.056 21:33:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:38.056 21:33:03 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.056 21:33:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.056 21:33:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:38.056 21:33:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.056 21:33:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:38.056 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:38.056 21:33:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.056 21:33:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.056 21:33:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.056 21:33:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:38.056 21:33:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.056 21:33:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:38.056 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:38.056 21:33:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.056 21:33:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:38.056 21:33:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:38.056 21:33:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:38.056 21:33:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:38.056 21:33:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.056 21:33:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.056 21:33:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.056 21:33:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:38.056 21:33:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.056 21:33:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.056 21:33:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:38.056 21:33:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.056 21:33:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.056 21:33:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:38.056 21:33:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:38.056 21:33:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.056 21:33:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:38.056 21:33:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:38.056 21:33:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:38.056 21:33:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:38.056 21:33:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:38.056 21:33:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:38.056 21:33:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:38.056 21:33:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:38.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:38.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms
00:18:38.056 
00:18:38.056 --- 10.0.0.2 ping statistics ---
00:18:38.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:38.056 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms
00:18:38.056 21:33:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:38.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:38.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms
00:18:38.056 
00:18:38.056 --- 10.0.0.1 ping statistics ---
00:18:38.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:38.056 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms
00:18:38.056 21:33:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:38.056 21:33:03 -- nvmf/common.sh@411 -- # return 0
00:18:38.056 21:33:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:18:38.056 21:33:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:38.056 21:33:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:18:38.056 21:33:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:18:38.056 21:33:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:38.056 21:33:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:18:38.056 21:33:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:18:38.056 21:33:03 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:18:38.056 21:33:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:18:38.056 21:33:03 -- common/autotest_common.sh@710 -- # xtrace_disable
00:18:38.056 21:33:03 -- common/autotest_common.sh@10 -- # set +x
00:18:38.056 21:33:03 -- nvmf/common.sh@470 -- # nvmfpid=2654025
00:18:38.056 21:33:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:38.056 21:33:03 -- nvmf/common.sh@471 -- # waitforlisten 2654025
00:18:38.056 21:33:03 -- common/autotest_common.sh@817 -- # '[' -z 2654025 ']'
00:18:38.056 21:33:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:38.056 21:33:03 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:38.056 21:33:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:38.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:38.056 21:33:03 -- common/autotest_common.sh@826 -- # xtrace_disable
00:18:38.056 21:33:03 -- common/autotest_common.sh@10 -- # set +x
00:18:38.056 [2024-04-24 21:33:03.515053] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:18:38.056 [2024-04-24 21:33:03.515132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:38.056 EAL: No free 2048 kB hugepages reported on node 1
00:18:38.056 [2024-04-24 21:33:03.583247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:38.056 [2024-04-24 21:33:03.687074] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:38.056 [2024-04-24 21:33:03.687129] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
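
The namespace plumbing that nvmf_tcp_init performed a few lines up condenses to the commands below (interface names cvl_0_0/cvl_0_1 are the e810 ports detected on this host; this is a summary of the traced steps, not the full nvmf/common.sh logic). The first port becomes the target at 10.0.0.2 inside its own namespace, the second stays in the root namespace as the 10.0.0.1 initiator:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # the sanity check logged above
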
00:18:38.056 [2024-04-24 21:33:03.687157] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.056 [2024-04-24 21:33:03.687168] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.056 [2024-04-24 21:33:03.687178] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.056 [2024-04-24 21:33:03.687223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.315 21:33:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:38.315 21:33:03 -- common/autotest_common.sh@850 -- # return 0 00:18:38.315 21:33:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:38.315 21:33:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:38.315 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:18:38.315 21:33:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.315 21:33:03 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:38.315 21:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.315 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:18:38.315 [2024-04-24 21:33:03.830938] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.315 [2024-04-24 21:33:03.839124] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:38.315 null0 00:18:38.315 [2024-04-24 21:33:03.871066] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.315 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.315 21:33:03 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2654161 00:18:38.315 21:33:03 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:38.315 21:33:03 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2654161 /tmp/host.sock 00:18:38.315 21:33:03 -- common/autotest_common.sh@817 -- # '[' -z 2654161 ']' 00:18:38.315 21:33:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:18:38.315 21:33:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:38.315 21:33:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:38.315 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:38.315 21:33:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:38.315 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:18:38.315 [2024-04-24 21:33:03.934211] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
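The rpc_cmd call at discovery_remove_ifc.sh@43 runs with xtrace disabled, so its arguments do not appear in the trace. Judging from the listener notices and the null0 bdev it produces, it plausibly amounts to something like the following sketch; the null-bdev size and block size here are illustrative assumptions, not values from the log:

  rpc_cmd nvmf_create_transport -t tcp -o
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009 -f ipv4
  rpc_cmd bdev_null_create null0 1000 512                        # size/block size assumed
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a    # allow any host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4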
00:18:38.315 [2024-04-24 21:33:03.934292] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2654161 ] 00:18:38.315 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.574 [2024-04-24 21:33:03.995578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.574 [2024-04-24 21:33:04.108994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.508 21:33:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:39.508 21:33:04 -- common/autotest_common.sh@850 -- # return 0 00:18:39.508 21:33:04 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:39.508 21:33:04 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:39.508 21:33:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.508 21:33:04 -- common/autotest_common.sh@10 -- # set +x 00:18:39.508 21:33:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.508 21:33:04 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:39.508 21:33:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.508 21:33:04 -- common/autotest_common.sh@10 -- # set +x 00:18:39.508 21:33:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.508 21:33:05 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:39.508 21:33:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.508 21:33:05 -- common/autotest_common.sh@10 -- # set +x 00:18:40.443 [2024-04-24 21:33:06.081874] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:40.443 [2024-04-24 21:33:06.081927] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:40.443 [2024-04-24 21:33:06.081953] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:40.701 [2024-04-24 21:33:06.169244] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:40.701 [2024-04-24 21:33:06.271038] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:40.701 [2024-04-24 21:33:06.271103] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:40.701 [2024-04-24 21:33:06.271150] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:40.701 [2024-04-24 21:33:06.271175] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:40.701 [2024-04-24 21:33:06.271214] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:40.701 21:33:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:40.701 21:33:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:40.701 21:33:06 -- common/autotest_common.sh@10 -- # set +x 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:40.701 [2024-04-24 21:33:06.280010] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c1b280 was disconnected and freed. delete nvme_qpair. 00:18:40.701 21:33:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:40.701 21:33:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:40.701 21:33:06 -- common/autotest_common.sh@10 -- # set +x 00:18:40.701 21:33:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:40.959 21:33:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.959 21:33:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:40.959 21:33:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:41.893 21:33:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:41.893 21:33:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:41.893 21:33:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:41.893 21:33:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.893 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:18:41.893 21:33:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:41.893 21:33:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:41.893 21:33:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.893 21:33:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:41.893 21:33:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:42.825 21:33:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:42.825 21:33:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:42.825 21:33:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:42.825 21:33:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.825 21:33:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:42.825 21:33:08 -- common/autotest_common.sh@10 -- # set +x 00:18:42.825 21:33:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:42.825 21:33:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.825 21:33:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:42.825 21:33:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:44.199 21:33:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:44.199 21:33:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:44.199 21:33:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:44.199 21:33:09 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:44.199 21:33:09 -- common/autotest_common.sh@10 -- # set +x 00:18:44.199 21:33:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:44.199 21:33:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:44.199 21:33:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:44.199 21:33:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:44.199 21:33:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:45.132 21:33:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:45.132 21:33:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:45.132 21:33:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:45.132 21:33:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.132 21:33:10 -- common/autotest_common.sh@10 -- # set +x 00:18:45.132 21:33:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:45.132 21:33:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:45.132 21:33:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.132 21:33:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:45.132 21:33:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:46.065 21:33:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:46.065 21:33:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:46.065 21:33:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:46.065 21:33:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.065 21:33:11 -- common/autotest_common.sh@10 -- # set +x 00:18:46.065 21:33:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:46.065 21:33:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:46.065 21:33:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.065 21:33:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:46.065 21:33:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:46.065 [2024-04-24 21:33:11.711990] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:46.065 [2024-04-24 21:33:11.712069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.065 [2024-04-24 21:33:11.712100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.065 [2024-04-24 21:33:11.712121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.065 [2024-04-24 21:33:11.712136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.065 [2024-04-24 21:33:11.712151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.065 [2024-04-24 21:33:11.712165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.065 [2024-04-24 21:33:11.712180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.065 [2024-04-24 21:33:11.712195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.065 [2024-04-24 21:33:11.712210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.065 [2024-04-24 21:33:11.712225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.065 [2024-04-24 21:33:11.712239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be17a0 is same with the state(5) to be set 00:18:46.065 [2024-04-24 21:33:11.722012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be17a0 (9): Bad file descriptor 00:18:46.065 [2024-04-24 21:33:11.732058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:46.999 21:33:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:46.999 21:33:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:46.999 21:33:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.999 21:33:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:46.999 21:33:12 -- common/autotest_common.sh@10 -- # set +x 00:18:46.999 21:33:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:46.999 21:33:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:47.257 [2024-04-24 21:33:12.781672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:48.191 [2024-04-24 21:33:13.805700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:48.191 [2024-04-24 21:33:13.805753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be17a0 with addr=10.0.0.2, port=4420 00:18:48.191 [2024-04-24 21:33:13.805779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be17a0 is same with the state(5) to be set 00:18:48.191 [2024-04-24 21:33:13.806273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be17a0 (9): Bad file descriptor 00:18:48.191 [2024-04-24 21:33:13.806321] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
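The jq/sort/xargs pipelines traced throughout this test come from its two polling helpers; reconstructed from the trace at discovery_remove_ifc.sh@29-34, they look roughly like this. Once cvl_0_0 is taken down, every reconnect attempt dies with errno 110 (ETIMEDOUT, "Connection timed out"), so the poll eventually observes the bdev list change:

  get_bdev_list() {
      # names of all bdevs on the host app, normalized to one sorted line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # spin until the bdev list matches the expected value ('' = everything gone)
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }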
00:18:48.191 [2024-04-24 21:33:13.806375] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:48.191 [2024-04-24 21:33:13.806427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.191 [2024-04-24 21:33:13.806449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.191 [2024-04-24 21:33:13.806467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.191 [2024-04-24 21:33:13.806480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.191 [2024-04-24 21:33:13.806494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.191 [2024-04-24 21:33:13.806514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.191 [2024-04-24 21:33:13.806528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.191 [2024-04-24 21:33:13.806541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.191 [2024-04-24 21:33:13.806554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.191 [2024-04-24 21:33:13.806566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.191 [2024-04-24 21:33:13.806579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
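The timing of that failure is set by the options the discovery was started with back at discovery_remove_ifc.sh@69, quoted here from the trace with comments added:

  # --reconnect-delay-sec 1       retry the TCP connect once per second
  # --ctrlr-loss-timeout-sec 2    declare the controller lost, deleting its bdevs, after 2 s
  # --fast-io-fail-timeout-sec 1  fail queued I/O after 1 s
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach

Once the loss timeout fires, nvme0n1 disappears and the wait_for_bdev '' check at @79 completes.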
00:18:48.191 [2024-04-24 21:33:13.806814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1bb0 (9): Bad file descriptor 00:18:48.191 [2024-04-24 21:33:13.807833] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:48.191 [2024-04-24 21:33:13.807855] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:48.191 21:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.191 21:33:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:48.191 21:33:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:49.570 21:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.570 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:49.570 21:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.570 21:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:49.570 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:49.570 21:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:49.570 21:33:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:50.505 [2024-04-24 21:33:15.827068] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:50.505 [2024-04-24 21:33:15.827101] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:50.505 [2024-04-24 21:33:15.827122] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:50.505 [2024-04-24 21:33:15.913391] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:50.505 21:33:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:50.505 21:33:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.505 21:33:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.505 21:33:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:50.505 21:33:15 -- common/autotest_common.sh@10 -- # set +x 00:18:50.505 21:33:15 -- host/discovery_remove_ifc.sh@29 -- # sort 
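The recovery half of the test, visible in the @82-@86 traces just above, is the mirror image: re-plumb the interface and wait for the discovery service to re-attach, which surfaces the namespace under a fresh name since the old controller was torn down. Roughly:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1    # the re-attached controller enumerates as nvme1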
00:18:50.505 21:33:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:50.505 21:33:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.505 [2024-04-24 21:33:15.976205] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:50.505 [2024-04-24 21:33:15.976251] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:50.505 [2024-04-24 21:33:15.976284] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:50.505 [2024-04-24 21:33:15.976305] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:50.505 [2024-04-24 21:33:15.976318] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:50.505 [2024-04-24 21:33:15.985124] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c257c0 was disconnected and freed. delete nvme_qpair. 00:18:50.505 21:33:15 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:50.505 21:33:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:51.440 21:33:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:51.440 21:33:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:51.440 21:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.440 21:33:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:51.440 21:33:17 -- common/autotest_common.sh@10 -- # set +x 00:18:51.440 21:33:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:51.440 21:33:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:51.440 21:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.440 21:33:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:51.440 21:33:17 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:51.440 21:33:17 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2654161 00:18:51.440 21:33:17 -- common/autotest_common.sh@936 -- # '[' -z 2654161 ']' 00:18:51.440 21:33:17 -- common/autotest_common.sh@940 -- # kill -0 2654161 00:18:51.440 21:33:17 -- common/autotest_common.sh@941 -- # uname 00:18:51.440 21:33:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:51.440 21:33:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2654161 00:18:51.440 21:33:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:51.440 21:33:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:51.440 21:33:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2654161' 00:18:51.440 killing process with pid 2654161 00:18:51.440 21:33:17 -- common/autotest_common.sh@955 -- # kill 2654161 00:18:51.440 21:33:17 -- common/autotest_common.sh@960 -- # wait 2654161 00:18:51.699 21:33:17 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:51.699 21:33:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:51.699 21:33:17 -- nvmf/common.sh@117 -- # sync 00:18:51.699 21:33:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.699 21:33:17 -- nvmf/common.sh@120 -- # set +e 00:18:51.699 21:33:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.699 21:33:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.699 rmmod nvme_tcp 00:18:51.699 rmmod nvme_fabrics 00:18:51.699 rmmod nvme_keyring 00:18:51.957 21:33:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.957 21:33:17 -- nvmf/common.sh@124 -- # set -e 00:18:51.957 21:33:17 
-- nvmf/common.sh@125 -- # return 0 00:18:51.957 21:33:17 -- nvmf/common.sh@478 -- # '[' -n 2654025 ']' 00:18:51.957 21:33:17 -- nvmf/common.sh@479 -- # killprocess 2654025 00:18:51.957 21:33:17 -- common/autotest_common.sh@936 -- # '[' -z 2654025 ']' 00:18:51.957 21:33:17 -- common/autotest_common.sh@940 -- # kill -0 2654025 00:18:51.957 21:33:17 -- common/autotest_common.sh@941 -- # uname 00:18:51.957 21:33:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:51.957 21:33:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2654025 00:18:51.957 21:33:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:51.957 21:33:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:51.957 21:33:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2654025' 00:18:51.957 killing process with pid 2654025 00:18:51.957 21:33:17 -- common/autotest_common.sh@955 -- # kill 2654025 00:18:51.957 21:33:17 -- common/autotest_common.sh@960 -- # wait 2654025 00:18:52.216 21:33:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:52.216 21:33:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:52.216 21:33:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:52.216 21:33:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.216 21:33:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:52.216 21:33:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.216 21:33:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.216 21:33:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.119 21:33:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:54.119 00:18:54.119 real 0m18.410s 00:18:54.119 user 0m26.251s 00:18:54.119 sys 0m3.052s 00:18:54.119 21:33:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:54.119 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:18:54.119 ************************************ 00:18:54.119 END TEST nvmf_discovery_remove_ifc 00:18:54.119 ************************************ 00:18:54.119 21:33:19 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:54.119 21:33:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:54.119 21:33:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:54.119 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:18:54.378 ************************************ 00:18:54.378 START TEST nvmf_identify_kernel_target 00:18:54.378 ************************************ 00:18:54.378 21:33:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:54.378 * Looking for test storage... 
00:18:54.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:54.378 21:33:19 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.378 21:33:19 -- nvmf/common.sh@7 -- # uname -s 00:18:54.378 21:33:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.378 21:33:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.378 21:33:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.378 21:33:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.378 21:33:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.378 21:33:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.378 21:33:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.378 21:33:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.378 21:33:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.378 21:33:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.378 21:33:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.378 21:33:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.378 21:33:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.378 21:33:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.378 21:33:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.378 21:33:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.378 21:33:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:54.378 21:33:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.378 21:33:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.378 21:33:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.378 21:33:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.378 21:33:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.378 21:33:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.378 21:33:19 -- paths/export.sh@5 -- # export PATH 00:18:54.378 21:33:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.378 21:33:19 -- nvmf/common.sh@47 -- # : 0 00:18:54.378 21:33:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:54.378 21:33:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:54.378 21:33:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.378 21:33:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.378 21:33:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.378 21:33:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:54.378 21:33:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:54.378 21:33:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:54.378 21:33:19 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:54.378 21:33:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:54.378 21:33:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.378 21:33:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:54.378 21:33:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:54.378 21:33:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:54.378 21:33:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.378 21:33:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.378 21:33:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.378 21:33:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:54.378 21:33:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:54.378 21:33:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:54.378 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:18:56.288 21:33:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:56.288 21:33:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:56.288 21:33:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:56.288 21:33:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:56.288 21:33:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:56.288 21:33:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:56.288 21:33:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:56.288 21:33:21 -- nvmf/common.sh@295 -- # net_devs=() 00:18:56.288 21:33:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:56.288 21:33:21 -- nvmf/common.sh@296 -- # e810=() 00:18:56.288 21:33:21 -- nvmf/common.sh@296 -- # local -ga e810 00:18:56.288 21:33:21 -- nvmf/common.sh@297 -- # 
x722=() 00:18:56.288 21:33:21 -- nvmf/common.sh@297 -- # local -ga x722 00:18:56.288 21:33:21 -- nvmf/common.sh@298 -- # mlx=() 00:18:56.288 21:33:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:56.288 21:33:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.288 21:33:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.288 21:33:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.288 21:33:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.288 21:33:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.288 21:33:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.288 21:33:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.288 21:33:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.288 21:33:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.288 21:33:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.288 21:33:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.288 21:33:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:56.288 21:33:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:56.288 21:33:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:56.288 21:33:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.288 21:33:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:56.288 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:56.288 21:33:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.288 21:33:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:56.288 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:56.288 21:33:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:56.288 21:33:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.288 21:33:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.288 21:33:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:56.288 21:33:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.288 21:33:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:56.288 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:56.288 21:33:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:18:56.288 21:33:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.288 21:33:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.288 21:33:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:56.288 21:33:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.288 21:33:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:56.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:56.288 21:33:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.288 21:33:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:56.288 21:33:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:56.288 21:33:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:56.288 21:33:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:56.288 21:33:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.288 21:33:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.288 21:33:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.288 21:33:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:56.288 21:33:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.288 21:33:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.288 21:33:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:56.288 21:33:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.288 21:33:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.288 21:33:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:56.288 21:33:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:56.288 21:33:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.288 21:33:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.547 21:33:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.547 21:33:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.547 21:33:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:56.547 21:33:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:56.547 21:33:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.547 21:33:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.547 21:33:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:56.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:18:56.547 00:18:56.547 --- 10.0.0.2 ping statistics --- 00:18:56.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.547 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:18:56.547 21:33:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:56.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:18:56.547 00:18:56.547 --- 10.0.0.1 ping statistics --- 00:18:56.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.547 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:56.547 21:33:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.547 21:33:22 -- nvmf/common.sh@411 -- # return 0 00:18:56.547 21:33:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:56.547 21:33:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.547 21:33:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:56.547 21:33:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:56.547 21:33:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.547 21:33:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:56.547 21:33:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:56.547 21:33:22 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:56.547 21:33:22 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:56.547 21:33:22 -- nvmf/common.sh@717 -- # local ip 00:18:56.547 21:33:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:56.547 21:33:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:56.547 21:33:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.547 21:33:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.547 21:33:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:56.548 21:33:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.548 21:33:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:56.548 21:33:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:56.548 21:33:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:56.548 21:33:22 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:56.548 21:33:22 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:56.548 21:33:22 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:56.548 21:33:22 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:18:56.548 21:33:22 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:56.548 21:33:22 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:56.548 21:33:22 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:56.548 21:33:22 -- nvmf/common.sh@628 -- # local block nvme 00:18:56.548 21:33:22 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:56.548 21:33:22 -- nvmf/common.sh@631 -- # modprobe nvmet 00:18:56.548 21:33:22 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:56.548 21:33:22 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:18:57.482 Waiting for block devices as requested 00:18:57.741 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:18:57.741 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:58.000 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:58.000 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:58.000 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:58.000 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:58.259 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:58.259 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:58.259 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:58.259 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:58.516 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:58.516 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:58.516 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:58.516 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:58.774 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:58.774 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:58.774 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:59.032 21:33:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:18:59.032 21:33:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:59.032 21:33:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:18:59.032 21:33:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:59.032 21:33:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:59.032 21:33:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:59.032 21:33:24 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:18:59.032 21:33:24 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:59.032 21:33:24 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:18:59.032 No valid GPT data, bailing 00:18:59.032 21:33:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:59.032 21:33:24 -- scripts/common.sh@391 -- # pt= 00:18:59.032 21:33:24 -- scripts/common.sh@392 -- # return 1 00:18:59.032 21:33:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:18:59.032 21:33:24 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:18:59.032 21:33:24 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:59.032 21:33:24 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:59.032 21:33:24 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:59.032 21:33:24 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:59.032 21:33:24 -- nvmf/common.sh@656 -- # echo 1 00:18:59.032 21:33:24 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:18:59.032 21:33:24 -- nvmf/common.sh@658 -- # echo 1 00:18:59.032 21:33:24 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:18:59.032 21:33:24 -- nvmf/common.sh@661 -- # echo tcp 00:18:59.032 21:33:24 -- nvmf/common.sh@662 -- # echo 4420 00:18:59.032 21:33:24 -- nvmf/common.sh@663 -- # echo ipv4 00:18:59.032 21:33:24 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:59.032 21:33:24 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:18:59.032 00:18:59.032 Discovery Log Number of Records 2, Generation counter 2 00:18:59.032 =====Discovery Log Entry 0====== 00:18:59.032 trtype: tcp 00:18:59.032 adrfam: ipv4 00:18:59.032 subtype: current discovery subsystem 00:18:59.032 treq: not specified, sq flow control disable supported 00:18:59.032 portid: 1 00:18:59.032 trsvcid: 4420 00:18:59.032 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:59.032 traddr: 10.0.0.1 00:18:59.032 eflags: none 00:18:59.032 sectype: none 00:18:59.032 =====Discovery Log Entry 1====== 00:18:59.032 trtype: tcp 00:18:59.032 adrfam: ipv4 00:18:59.032 subtype: nvme subsystem 00:18:59.032 treq: not specified, sq flow control disable supported 00:18:59.032 portid: 1 00:18:59.032 trsvcid: 4420 00:18:59.032 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:59.032 traddr: 10.0.0.1 00:18:59.032 eflags: none 00:18:59.032 sectype: none 00:18:59.032 21:33:24 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:59.032 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:59.032 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.292 ===================================================== 00:18:59.292 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:59.292 ===================================================== 00:18:59.292 Controller Capabilities/Features 00:18:59.292 ================================ 00:18:59.292 Vendor ID: 0000 00:18:59.292 Subsystem Vendor ID: 0000 00:18:59.292 Serial Number: fe99ae9cd9effa984d8d 00:18:59.292 Model Number: Linux 00:18:59.292 Firmware Version: 6.7.0-68 00:18:59.292 Recommended Arb Burst: 0 00:18:59.292 IEEE OUI Identifier: 00 00 00 00:18:59.292 Multi-path I/O 00:18:59.292 May have multiple subsystem ports: No 00:18:59.292 May have multiple controllers: No 00:18:59.292 Associated with SR-IOV VF: No 00:18:59.292 Max Data Transfer Size: Unlimited 00:18:59.292 Max Number of Namespaces: 0 00:18:59.292 Max Number of I/O Queues: 1024 00:18:59.292 NVMe Specification Version (VS): 1.3 00:18:59.292 NVMe Specification Version (Identify): 1.3 00:18:59.292 Maximum Queue Entries: 1024 00:18:59.292 Contiguous Queues Required: No 00:18:59.292 Arbitration Mechanisms Supported 00:18:59.292 Weighted Round Robin: Not Supported 00:18:59.292 Vendor Specific: Not Supported 00:18:59.292 Reset Timeout: 7500 ms 00:18:59.292 Doorbell Stride: 4 bytes 00:18:59.292 NVM Subsystem Reset: Not Supported 00:18:59.292 Command Sets Supported 00:18:59.292 NVM Command Set: Supported 00:18:59.292 Boot Partition: Not Supported 00:18:59.292 Memory Page Size Minimum: 4096 bytes 00:18:59.292 Memory Page Size Maximum: 4096 bytes 00:18:59.292 Persistent Memory Region: Not Supported 00:18:59.292 Optional Asynchronous Events Supported 00:18:59.292 Namespace Attribute Notices: Not Supported 00:18:59.292 Firmware Activation Notices: Not Supported 00:18:59.292 ANA Change Notices: Not Supported 00:18:59.292 PLE Aggregate Log Change Notices: Not Supported 00:18:59.292 LBA Status Info Alert Notices: Not Supported 00:18:59.292 EGE Aggregate Log Change Notices: Not Supported 00:18:59.292 Normal NVM Subsystem Shutdown event: Not Supported 00:18:59.292 Zone Descriptor Change Notices: Not Supported 00:18:59.292 Discovery Log Change Notices: Supported 
00:18:59.292 Controller Attributes 00:18:59.292 128-bit Host Identifier: Not Supported 00:18:59.292 Non-Operational Permissive Mode: Not Supported 00:18:59.292 NVM Sets: Not Supported 00:18:59.292 Read Recovery Levels: Not Supported 00:18:59.292 Endurance Groups: Not Supported 00:18:59.292 Predictable Latency Mode: Not Supported 00:18:59.292 Traffic Based Keep ALive: Not Supported 00:18:59.292 Namespace Granularity: Not Supported 00:18:59.292 SQ Associations: Not Supported 00:18:59.292 UUID List: Not Supported 00:18:59.292 Multi-Domain Subsystem: Not Supported 00:18:59.292 Fixed Capacity Management: Not Supported 00:18:59.292 Variable Capacity Management: Not Supported 00:18:59.292 Delete Endurance Group: Not Supported 00:18:59.292 Delete NVM Set: Not Supported 00:18:59.292 Extended LBA Formats Supported: Not Supported 00:18:59.292 Flexible Data Placement Supported: Not Supported 00:18:59.292 00:18:59.292 Controller Memory Buffer Support 00:18:59.292 ================================ 00:18:59.292 Supported: No 00:18:59.292 00:18:59.292 Persistent Memory Region Support 00:18:59.292 ================================ 00:18:59.292 Supported: No 00:18:59.292 00:18:59.292 Admin Command Set Attributes 00:18:59.292 ============================ 00:18:59.292 Security Send/Receive: Not Supported 00:18:59.292 Format NVM: Not Supported 00:18:59.292 Firmware Activate/Download: Not Supported 00:18:59.292 Namespace Management: Not Supported 00:18:59.292 Device Self-Test: Not Supported 00:18:59.292 Directives: Not Supported 00:18:59.292 NVMe-MI: Not Supported 00:18:59.292 Virtualization Management: Not Supported 00:18:59.292 Doorbell Buffer Config: Not Supported 00:18:59.292 Get LBA Status Capability: Not Supported 00:18:59.292 Command & Feature Lockdown Capability: Not Supported 00:18:59.292 Abort Command Limit: 1 00:18:59.292 Async Event Request Limit: 1 00:18:59.292 Number of Firmware Slots: N/A 00:18:59.292 Firmware Slot 1 Read-Only: N/A 00:18:59.292 Firmware Activation Without Reset: N/A 00:18:59.292 Multiple Update Detection Support: N/A 00:18:59.292 Firmware Update Granularity: No Information Provided 00:18:59.292 Per-Namespace SMART Log: No 00:18:59.293 Asymmetric Namespace Access Log Page: Not Supported 00:18:59.293 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:59.293 Command Effects Log Page: Not Supported 00:18:59.293 Get Log Page Extended Data: Supported 00:18:59.293 Telemetry Log Pages: Not Supported 00:18:59.293 Persistent Event Log Pages: Not Supported 00:18:59.293 Supported Log Pages Log Page: May Support 00:18:59.293 Commands Supported & Effects Log Page: Not Supported 00:18:59.293 Feature Identifiers & Effects Log Page:May Support 00:18:59.293 NVMe-MI Commands & Effects Log Page: May Support 00:18:59.293 Data Area 4 for Telemetry Log: Not Supported 00:18:59.293 Error Log Page Entries Supported: 1 00:18:59.293 Keep Alive: Not Supported 00:18:59.293 00:18:59.293 NVM Command Set Attributes 00:18:59.293 ========================== 00:18:59.293 Submission Queue Entry Size 00:18:59.293 Max: 1 00:18:59.293 Min: 1 00:18:59.293 Completion Queue Entry Size 00:18:59.293 Max: 1 00:18:59.293 Min: 1 00:18:59.293 Number of Namespaces: 0 00:18:59.293 Compare Command: Not Supported 00:18:59.293 Write Uncorrectable Command: Not Supported 00:18:59.293 Dataset Management Command: Not Supported 00:18:59.293 Write Zeroes Command: Not Supported 00:18:59.293 Set Features Save Field: Not Supported 00:18:59.293 Reservations: Not Supported 00:18:59.293 Timestamp: Not Supported 00:18:59.293 Copy: Not 
Supported 00:18:59.293 Volatile Write Cache: Not Present 00:18:59.293 Atomic Write Unit (Normal): 1 00:18:59.293 Atomic Write Unit (PFail): 1 00:18:59.293 Atomic Compare & Write Unit: 1 00:18:59.293 Fused Compare & Write: Not Supported 00:18:59.293 Scatter-Gather List 00:18:59.293 SGL Command Set: Supported 00:18:59.293 SGL Keyed: Not Supported 00:18:59.293 SGL Bit Bucket Descriptor: Not Supported 00:18:59.293 SGL Metadata Pointer: Not Supported 00:18:59.293 Oversized SGL: Not Supported 00:18:59.293 SGL Metadata Address: Not Supported 00:18:59.293 SGL Offset: Supported 00:18:59.293 Transport SGL Data Block: Not Supported 00:18:59.293 Replay Protected Memory Block: Not Supported 00:18:59.293 00:18:59.293 Firmware Slot Information 00:18:59.293 ========================= 00:18:59.293 Active slot: 0 00:18:59.293 00:18:59.293 00:18:59.293 Error Log 00:18:59.293 ========= 00:18:59.293 00:18:59.293 Active Namespaces 00:18:59.293 ================= 00:18:59.293 Discovery Log Page 00:18:59.293 ================== 00:18:59.293 Generation Counter: 2 00:18:59.293 Number of Records: 2 00:18:59.293 Record Format: 0 00:18:59.293 00:18:59.293 Discovery Log Entry 0 00:18:59.293 ---------------------- 00:18:59.293 Transport Type: 3 (TCP) 00:18:59.293 Address Family: 1 (IPv4) 00:18:59.293 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:59.293 Entry Flags: 00:18:59.293 Duplicate Returned Information: 0 00:18:59.293 Explicit Persistent Connection Support for Discovery: 0 00:18:59.293 Transport Requirements: 00:18:59.293 Secure Channel: Not Specified 00:18:59.293 Port ID: 1 (0x0001) 00:18:59.293 Controller ID: 65535 (0xffff) 00:18:59.293 Admin Max SQ Size: 32 00:18:59.293 Transport Service Identifier: 4420 00:18:59.293 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:59.293 Transport Address: 10.0.0.1 00:18:59.293 Discovery Log Entry 1 00:18:59.293 ---------------------- 00:18:59.293 Transport Type: 3 (TCP) 00:18:59.293 Address Family: 1 (IPv4) 00:18:59.293 Subsystem Type: 2 (NVM Subsystem) 00:18:59.293 Entry Flags: 00:18:59.293 Duplicate Returned Information: 0 00:18:59.293 Explicit Persistent Connection Support for Discovery: 0 00:18:59.293 Transport Requirements: 00:18:59.293 Secure Channel: Not Specified 00:18:59.293 Port ID: 1 (0x0001) 00:18:59.293 Controller ID: 65535 (0xffff) 00:18:59.293 Admin Max SQ Size: 32 00:18:59.293 Transport Service Identifier: 4420 00:18:59.293 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:59.293 Transport Address: 10.0.0.1 00:18:59.293 21:33:24 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:59.293 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.293 get_feature(0x01) failed 00:18:59.293 get_feature(0x02) failed 00:18:59.293 get_feature(0x04) failed 00:18:59.293 ===================================================== 00:18:59.293 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:59.293 ===================================================== 00:18:59.293 Controller Capabilities/Features 00:18:59.293 ================================ 00:18:59.293 Vendor ID: 0000 00:18:59.293 Subsystem Vendor ID: 0000 00:18:59.293 Serial Number: 71d7947d8894ea5aaa44 00:18:59.293 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:59.293 Firmware Version: 6.7.0-68 00:18:59.293 Recommended Arb Burst: 6 00:18:59.293 IEEE OUI Identifier: 00 00 00 
00:18:59.293 Multi-path I/O 00:18:59.293 May have multiple subsystem ports: Yes 00:18:59.293 May have multiple controllers: Yes 00:18:59.293 Associated with SR-IOV VF: No 00:18:59.293 Max Data Transfer Size: Unlimited 00:18:59.293 Max Number of Namespaces: 1024 00:18:59.293 Max Number of I/O Queues: 128 00:18:59.293 NVMe Specification Version (VS): 1.3 00:18:59.293 NVMe Specification Version (Identify): 1.3 00:18:59.293 Maximum Queue Entries: 1024 00:18:59.293 Contiguous Queues Required: No 00:18:59.293 Arbitration Mechanisms Supported 00:18:59.293 Weighted Round Robin: Not Supported 00:18:59.293 Vendor Specific: Not Supported 00:18:59.293 Reset Timeout: 7500 ms 00:18:59.293 Doorbell Stride: 4 bytes 00:18:59.293 NVM Subsystem Reset: Not Supported 00:18:59.293 Command Sets Supported 00:18:59.293 NVM Command Set: Supported 00:18:59.293 Boot Partition: Not Supported 00:18:59.293 Memory Page Size Minimum: 4096 bytes 00:18:59.293 Memory Page Size Maximum: 4096 bytes 00:18:59.293 Persistent Memory Region: Not Supported 00:18:59.293 Optional Asynchronous Events Supported 00:18:59.293 Namespace Attribute Notices: Supported 00:18:59.293 Firmware Activation Notices: Not Supported 00:18:59.293 ANA Change Notices: Supported 00:18:59.293 PLE Aggregate Log Change Notices: Not Supported 00:18:59.293 LBA Status Info Alert Notices: Not Supported 00:18:59.293 EGE Aggregate Log Change Notices: Not Supported 00:18:59.293 Normal NVM Subsystem Shutdown event: Not Supported 00:18:59.293 Zone Descriptor Change Notices: Not Supported 00:18:59.293 Discovery Log Change Notices: Not Supported 00:18:59.293 Controller Attributes 00:18:59.293 128-bit Host Identifier: Supported 00:18:59.293 Non-Operational Permissive Mode: Not Supported 00:18:59.293 NVM Sets: Not Supported 00:18:59.293 Read Recovery Levels: Not Supported 00:18:59.293 Endurance Groups: Not Supported 00:18:59.293 Predictable Latency Mode: Not Supported 00:18:59.293 Traffic Based Keep Alive: Supported 00:18:59.293 Namespace Granularity: Not Supported 00:18:59.293 SQ Associations: Not Supported 00:18:59.293 UUID List: Not Supported 00:18:59.293 Multi-Domain Subsystem: Not Supported 00:18:59.293 Fixed Capacity Management: Not Supported 00:18:59.293 Variable Capacity Management: Not Supported 00:18:59.293 Delete Endurance Group: Not Supported 00:18:59.293 Delete NVM Set: Not Supported 00:18:59.293 Extended LBA Formats Supported: Not Supported 00:18:59.293 Flexible Data Placement Supported: Not Supported 00:18:59.293 00:18:59.293 Controller Memory Buffer Support 00:18:59.293 ================================ 00:18:59.293 Supported: No 00:18:59.293 00:18:59.293 Persistent Memory Region Support 00:18:59.293 ================================ 00:18:59.293 Supported: No 00:18:59.293 00:18:59.293 Admin Command Set Attributes 00:18:59.293 ============================ 00:18:59.293 Security Send/Receive: Not Supported 00:18:59.293 Format NVM: Not Supported 00:18:59.293 Firmware Activate/Download: Not Supported 00:18:59.293 Namespace Management: Not Supported 00:18:59.293 Device Self-Test: Not Supported 00:18:59.293 Directives: Not Supported 00:18:59.293 NVMe-MI: Not Supported 00:18:59.293 Virtualization Management: Not Supported 00:18:59.293 Doorbell Buffer Config: Not Supported 00:18:59.293 Get LBA Status Capability: Not Supported 00:18:59.293 Command & Feature Lockdown Capability: Not Supported 00:18:59.293 Abort Command Limit: 4 00:18:59.293 Async Event Request Limit: 4 00:18:59.293 Number of Firmware Slots: N/A 00:18:59.293 Firmware Slot 1 Read-Only: N/A 00:18:59.293 
Firmware Activation Without Reset: N/A 00:18:59.293 Multiple Update Detection Support: N/A 00:18:59.293 Firmware Update Granularity: No Information Provided 00:18:59.293 Per-Namespace SMART Log: Yes 00:18:59.293 Asymmetric Namespace Access Log Page: Supported 00:18:59.293 ANA Transition Time : 10 sec 00:18:59.293 00:18:59.293 Asymmetric Namespace Access Capabilities 00:18:59.293 ANA Optimized State : Supported 00:18:59.293 ANA Non-Optimized State : Supported 00:18:59.294 ANA Inaccessible State : Supported 00:18:59.294 ANA Persistent Loss State : Supported 00:18:59.294 ANA Change State : Supported 00:18:59.294 ANAGRPID is not changed : No 00:18:59.294 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:59.294 00:18:59.294 ANA Group Identifier Maximum : 128 00:18:59.294 Number of ANA Group Identifiers : 128 00:18:59.294 Max Number of Allowed Namespaces : 1024 00:18:59.294 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:59.294 Command Effects Log Page: Supported 00:18:59.294 Get Log Page Extended Data: Supported 00:18:59.294 Telemetry Log Pages: Not Supported 00:18:59.294 Persistent Event Log Pages: Not Supported 00:18:59.294 Supported Log Pages Log Page: May Support 00:18:59.294 Commands Supported & Effects Log Page: Not Supported 00:18:59.294 Feature Identifiers & Effects Log Page: May Support 00:18:59.294 NVMe-MI Commands & Effects Log Page: May Support 00:18:59.294 Data Area 4 for Telemetry Log: Not Supported 00:18:59.294 Error Log Page Entries Supported: 128 00:18:59.294 Keep Alive: Supported 00:18:59.294 Keep Alive Granularity: 1000 ms 00:18:59.294 00:18:59.294 NVM Command Set Attributes 00:18:59.294 ========================== 00:18:59.294 Submission Queue Entry Size 00:18:59.294 Max: 64 00:18:59.294 Min: 64 00:18:59.294 Completion Queue Entry Size 00:18:59.294 Max: 16 00:18:59.294 Min: 16 00:18:59.294 Number of Namespaces: 1024 00:18:59.294 Compare Command: Not Supported 00:18:59.294 Write Uncorrectable Command: Not Supported 00:18:59.294 Dataset Management Command: Supported 00:18:59.294 Write Zeroes Command: Supported 00:18:59.294 Set Features Save Field: Not Supported 00:18:59.294 Reservations: Not Supported 00:18:59.294 Timestamp: Not Supported 00:18:59.294 Copy: Not Supported 00:18:59.294 Volatile Write Cache: Present 00:18:59.294 Atomic Write Unit (Normal): 1 00:18:59.294 Atomic Write Unit (PFail): 1 00:18:59.294 Atomic Compare & Write Unit: 1 00:18:59.294 Fused Compare & Write: Not Supported 00:18:59.294 Scatter-Gather List 00:18:59.294 SGL Command Set: Supported 00:18:59.294 SGL Keyed: Not Supported 00:18:59.294 SGL Bit Bucket Descriptor: Not Supported 00:18:59.294 SGL Metadata Pointer: Not Supported 00:18:59.294 Oversized SGL: Not Supported 00:18:59.294 SGL Metadata Address: Not Supported 00:18:59.294 SGL Offset: Supported 00:18:59.294 Transport SGL Data Block: Not Supported 00:18:59.294 Replay Protected Memory Block: Not Supported 00:18:59.294 00:18:59.294 Firmware Slot Information 00:18:59.294 ========================= 00:18:59.294 Active slot: 0 00:18:59.294 00:18:59.294 Asymmetric Namespace Access 00:18:59.294 =========================== 00:18:59.294 Change Count : 0 00:18:59.294 Number of ANA Group Descriptors : 1 00:18:59.294 ANA Group Descriptor : 0 00:18:59.294 ANA Group ID : 1 00:18:59.294 Number of NSID Values : 1 00:18:59.294 Change Count : 0 00:18:59.294 ANA State : 1 00:18:59.294 Namespace Identifier : 1 00:18:59.294 00:18:59.294 Commands Supported and Effects 00:18:59.294 ============================== 00:18:59.294 Admin Commands 00:18:59.294 -------------- 
00:18:59.294 Get Log Page (02h): Supported 00:18:59.294 Identify (06h): Supported 00:18:59.294 Abort (08h): Supported 00:18:59.294 Set Features (09h): Supported 00:18:59.294 Get Features (0Ah): Supported 00:18:59.294 Asynchronous Event Request (0Ch): Supported 00:18:59.294 Keep Alive (18h): Supported 00:18:59.294 I/O Commands 00:18:59.294 ------------ 00:18:59.294 Flush (00h): Supported 00:18:59.294 Write (01h): Supported LBA-Change 00:18:59.294 Read (02h): Supported 00:18:59.294 Write Zeroes (08h): Supported LBA-Change 00:18:59.294 Dataset Management (09h): Supported 00:18:59.294 00:18:59.294 Error Log 00:18:59.294 ========= 00:18:59.294 Entry: 0 00:18:59.294 Error Count: 0x3 00:18:59.294 Submission Queue Id: 0x0 00:18:59.294 Command Id: 0x5 00:18:59.294 Phase Bit: 0 00:18:59.294 Status Code: 0x2 00:18:59.294 Status Code Type: 0x0 00:18:59.294 Do Not Retry: 1 00:18:59.294 Error Location: 0x28 00:18:59.294 LBA: 0x0 00:18:59.294 Namespace: 0x0 00:18:59.294 Vendor Log Page: 0x0 00:18:59.294 ----------- 00:18:59.294 Entry: 1 00:18:59.294 Error Count: 0x2 00:18:59.294 Submission Queue Id: 0x0 00:18:59.294 Command Id: 0x5 00:18:59.294 Phase Bit: 0 00:18:59.294 Status Code: 0x2 00:18:59.294 Status Code Type: 0x0 00:18:59.294 Do Not Retry: 1 00:18:59.294 Error Location: 0x28 00:18:59.294 LBA: 0x0 00:18:59.294 Namespace: 0x0 00:18:59.294 Vendor Log Page: 0x0 00:18:59.294 ----------- 00:18:59.294 Entry: 2 00:18:59.294 Error Count: 0x1 00:18:59.294 Submission Queue Id: 0x0 00:18:59.294 Command Id: 0x4 00:18:59.294 Phase Bit: 0 00:18:59.294 Status Code: 0x2 00:18:59.294 Status Code Type: 0x0 00:18:59.294 Do Not Retry: 1 00:18:59.294 Error Location: 0x28 00:18:59.294 LBA: 0x0 00:18:59.294 Namespace: 0x0 00:18:59.294 Vendor Log Page: 0x0 00:18:59.294 00:18:59.294 Number of Queues 00:18:59.294 ================ 00:18:59.294 Number of I/O Submission Queues: 128 00:18:59.294 Number of I/O Completion Queues: 128 00:18:59.294 00:18:59.294 ZNS Specific Controller Data 00:18:59.294 ============================ 00:18:59.294 Zone Append Size Limit: 0 00:18:59.294 00:18:59.294 00:18:59.294 Active Namespaces 00:18:59.294 ================= 00:18:59.294 get_feature(0x05) failed 00:18:59.294 Namespace ID:1 00:18:59.294 Command Set Identifier: NVM (00h) 00:18:59.294 Deallocate: Supported 00:18:59.294 Deallocated/Unwritten Error: Not Supported 00:18:59.294 Deallocated Read Value: Unknown 00:18:59.294 Deallocate in Write Zeroes: Not Supported 00:18:59.294 Deallocated Guard Field: 0xFFFF 00:18:59.294 Flush: Supported 00:18:59.294 Reservation: Not Supported 00:18:59.294 Namespace Sharing Capabilities: Multiple Controllers 00:18:59.294 Size (in LBAs): 1953525168 (931GiB) 00:18:59.294 Capacity (in LBAs): 1953525168 (931GiB) 00:18:59.294 Utilization (in LBAs): 1953525168 (931GiB) 00:18:59.294 UUID: bb088496-bfd9-4e96-b80f-2cfd330ef88c 00:18:59.294 Thin Provisioning: Not Supported 00:18:59.294 Per-NS Atomic Units: Yes 00:18:59.294 Atomic Boundary Size (Normal): 0 00:18:59.294 Atomic Boundary Size (PFail): 0 00:18:59.294 Atomic Boundary Offset: 0 00:18:59.294 NGUID/EUI64 Never Reused: No 00:18:59.294 ANA group ID: 1 00:18:59.294 Namespace Write Protected: No 00:18:59.294 Number of LBA Formats: 1 00:18:59.294 Current LBA Format: LBA Format #00 00:18:59.294 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:59.294 00:18:59.294 21:33:24 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:59.294 21:33:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:59.294 21:33:24 -- nvmf/common.sh@117 -- # sync 00:18:59.294 21:33:24 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:59.294 21:33:24 -- nvmf/common.sh@120 -- # set +e 00:18:59.294 21:33:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:59.294 21:33:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:59.294 rmmod nvme_tcp 00:18:59.294 rmmod nvme_fabrics 00:18:59.294 21:33:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:59.294 21:33:24 -- nvmf/common.sh@124 -- # set -e 00:18:59.294 21:33:24 -- nvmf/common.sh@125 -- # return 0 00:18:59.294 21:33:24 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:59.294 21:33:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:59.294 21:33:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:59.294 21:33:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:59.294 21:33:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.294 21:33:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.294 21:33:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.294 21:33:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.294 21:33:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.825 21:33:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:01.825 21:33:26 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:01.825 21:33:26 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:01.825 21:33:26 -- nvmf/common.sh@675 -- # echo 0 00:19:01.825 21:33:26 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:01.825 21:33:26 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:01.825 21:33:26 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:01.825 21:33:26 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:01.825 21:33:26 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:01.825 21:33:26 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:19:01.825 21:33:26 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:02.759 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:02.759 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:02.759 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:02.759 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:02.759 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:02.759 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:02.759 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:02.759 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:02.759 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:02.759 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:02.759 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:02.759 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:02.759 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:02.759 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:02.759 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:02.759 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:03.694 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:19:03.694 00:19:03.694 real 0m9.427s 00:19:03.694 user 0m2.000s 00:19:03.694 sys 0m3.394s 00:19:03.694 21:33:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:03.694 21:33:29 -- common/autotest_common.sh@10 -- # set +x 00:19:03.694 ************************************ 00:19:03.694 END 
TEST nvmf_identify_kernel_target 00:19:03.694 ************************************ 00:19:03.694 21:33:29 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:03.694 21:33:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:03.694 21:33:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:03.694 21:33:29 -- common/autotest_common.sh@10 -- # set +x 00:19:03.956 ************************************ 00:19:03.956 START TEST nvmf_auth 00:19:03.956 ************************************ 00:19:03.956 21:33:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:03.956 * Looking for test storage... 00:19:03.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:03.956 21:33:29 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.956 21:33:29 -- nvmf/common.sh@7 -- # uname -s 00:19:03.956 21:33:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.956 21:33:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.956 21:33:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.956 21:33:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.956 21:33:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.956 21:33:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.956 21:33:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.956 21:33:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.956 21:33:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.956 21:33:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.956 21:33:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.956 21:33:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.956 21:33:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.956 21:33:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.956 21:33:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.956 21:33:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.956 21:33:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.956 21:33:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.956 21:33:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.956 21:33:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.956 21:33:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.956 21:33:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.956 21:33:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.956 21:33:29 -- paths/export.sh@5 -- # export PATH 00:19:03.956 21:33:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.956 21:33:29 -- nvmf/common.sh@47 -- # : 0 00:19:03.956 21:33:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.956 21:33:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.956 21:33:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.956 21:33:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.956 21:33:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.956 21:33:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.956 21:33:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.956 21:33:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.956 21:33:29 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:03.956 21:33:29 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:03.956 21:33:29 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:03.956 21:33:29 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:03.956 21:33:29 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:03.956 21:33:29 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:03.956 21:33:29 -- host/auth.sh@21 -- # keys=() 00:19:03.956 21:33:29 -- host/auth.sh@77 -- # nvmftestinit 00:19:03.956 21:33:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:03.956 21:33:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.956 21:33:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:03.956 21:33:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:03.956 21:33:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:03.956 21:33:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.956 21:33:29 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.956 21:33:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.956 21:33:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:03.956 21:33:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:03.956 21:33:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.956 21:33:29 -- common/autotest_common.sh@10 -- # set +x 00:19:05.859 21:33:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:05.859 21:33:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:05.859 21:33:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:05.859 21:33:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:05.859 21:33:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:05.859 21:33:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:05.859 21:33:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:05.859 21:33:31 -- nvmf/common.sh@295 -- # net_devs=() 00:19:05.859 21:33:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:05.859 21:33:31 -- nvmf/common.sh@296 -- # e810=() 00:19:05.859 21:33:31 -- nvmf/common.sh@296 -- # local -ga e810 00:19:05.859 21:33:31 -- nvmf/common.sh@297 -- # x722=() 00:19:05.859 21:33:31 -- nvmf/common.sh@297 -- # local -ga x722 00:19:05.859 21:33:31 -- nvmf/common.sh@298 -- # mlx=() 00:19:05.859 21:33:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:05.859 21:33:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.859 21:33:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.859 21:33:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.859 21:33:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.859 21:33:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.859 21:33:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.859 21:33:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.859 21:33:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.859 21:33:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.859 21:33:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.859 21:33:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.859 21:33:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:05.859 21:33:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:05.859 21:33:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:05.859 21:33:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:05.859 21:33:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:05.859 21:33:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:05.860 21:33:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.860 21:33:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:05.860 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:05.860 21:33:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.860 21:33:31 -- nvmf/common.sh@341 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:19:05.860 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:05.860 21:33:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:05.860 21:33:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.860 21:33:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.860 21:33:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:05.860 21:33:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.860 21:33:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:05.860 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:05.860 21:33:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.860 21:33:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.860 21:33:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.860 21:33:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:05.860 21:33:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.860 21:33:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:05.860 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:05.860 21:33:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.860 21:33:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:05.860 21:33:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:05.860 21:33:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:05.860 21:33:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.860 21:33:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.860 21:33:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.860 21:33:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:05.860 21:33:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.860 21:33:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.860 21:33:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:05.860 21:33:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.860 21:33:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.860 21:33:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:05.860 21:33:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:05.860 21:33:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.860 21:33:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.860 21:33:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.860 21:33:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.860 21:33:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:05.860 21:33:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.860 21:33:31 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.860 21:33:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.860 21:33:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:05.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:19:05.860 00:19:05.860 --- 10.0.0.2 ping statistics --- 00:19:05.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.860 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:19:05.860 21:33:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:19:05.860 00:19:05.860 --- 10.0.0.1 ping statistics --- 00:19:05.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.860 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:19:05.860 21:33:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.860 21:33:31 -- nvmf/common.sh@411 -- # return 0 00:19:05.860 21:33:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:05.860 21:33:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.860 21:33:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:05.860 21:33:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.860 21:33:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:05.860 21:33:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:05.860 21:33:31 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:19:05.860 21:33:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:05.860 21:33:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:05.860 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:19:05.860 21:33:31 -- nvmf/common.sh@470 -- # nvmfpid=2661873 00:19:05.860 21:33:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:05.860 21:33:31 -- nvmf/common.sh@471 -- # waitforlisten 2661873 00:19:05.860 21:33:31 -- common/autotest_common.sh@817 -- # '[' -z 2661873 ']' 00:19:05.860 21:33:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.860 21:33:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:05.860 21:33:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
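
The namespace plumbing traced above is what lets a single host exercise a real NVMe/TCP link: the target-side port (cvl_0_0) is moved into its own network namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, the firewall is opened for port 4420, and both directions are ping-verified before the SPDK app is started inside the namespace. A minimal sketch of the same topology, with eth_tgt and eth_ini as hypothetical stand-ins for the cvl_* interfaces:

    # One-host NVMe/TCP test topology; eth_tgt/eth_ini are placeholder names.
    ip netns add nvmf_tgt_ns                         # target-side namespace
    ip link set eth_tgt netns nvmf_tgt_ns            # isolate the target port
    ip addr add 10.0.0.1/24 dev eth_ini              # initiator address
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
    ip link set eth_ini up
    ip netns exec nvmf_tgt_ns ip link set eth_tgt up
    ip netns exec nvmf_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                               # root ns -> namespace
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1     # namespace -> root ns
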
00:19:05.860 21:33:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:05.860 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:19:06.459 21:33:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:06.459 21:33:31 -- common/autotest_common.sh@850 -- # return 0 00:19:06.459 21:33:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:06.459 21:33:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:06.459 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:19:06.459 21:33:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.459 21:33:31 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:06.459 21:33:31 -- host/auth.sh@81 -- # gen_key null 32 00:19:06.459 21:33:31 -- host/auth.sh@53 -- # local digest len file key 00:19:06.459 21:33:31 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.459 21:33:31 -- host/auth.sh@54 -- # local -A digests 00:19:06.459 21:33:31 -- host/auth.sh@56 -- # digest=null 00:19:06.459 21:33:31 -- host/auth.sh@56 -- # len=32 00:19:06.459 21:33:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:06.459 21:33:31 -- host/auth.sh@57 -- # key=95a790bb5e8ece3442fbfcfc9f5f7354 00:19:06.459 21:33:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:06.459 21:33:31 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.HXy 00:19:06.459 21:33:31 -- host/auth.sh@59 -- # format_dhchap_key 95a790bb5e8ece3442fbfcfc9f5f7354 0 00:19:06.459 21:33:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 95a790bb5e8ece3442fbfcfc9f5f7354 0 00:19:06.459 21:33:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:06.459 21:33:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:06.459 21:33:31 -- nvmf/common.sh@693 -- # key=95a790bb5e8ece3442fbfcfc9f5f7354 00:19:06.459 21:33:31 -- nvmf/common.sh@693 -- # digest=0 00:19:06.459 21:33:31 -- nvmf/common.sh@694 -- # python - 00:19:06.459 21:33:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.HXy 00:19:06.459 21:33:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.HXy 00:19:06.459 21:33:31 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.HXy 00:19:06.459 21:33:31 -- host/auth.sh@82 -- # gen_key null 48 00:19:06.459 21:33:31 -- host/auth.sh@53 -- # local digest len file key 00:19:06.459 21:33:31 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.459 21:33:31 -- host/auth.sh@54 -- # local -A digests 00:19:06.459 21:33:31 -- host/auth.sh@56 -- # digest=null 00:19:06.459 21:33:31 -- host/auth.sh@56 -- # len=48 00:19:06.459 21:33:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.459 21:33:31 -- host/auth.sh@57 -- # key=5f1f07e9dc0cf741b71719cac4adf03eaa8c9ea80d769d57 00:19:06.459 21:33:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:06.459 21:33:31 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.pn3 00:19:06.459 21:33:31 -- host/auth.sh@59 -- # format_dhchap_key 5f1f07e9dc0cf741b71719cac4adf03eaa8c9ea80d769d57 0 00:19:06.459 21:33:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 5f1f07e9dc0cf741b71719cac4adf03eaa8c9ea80d769d57 0 00:19:06.459 21:33:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:06.459 21:33:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:06.459 21:33:31 -- nvmf/common.sh@693 -- # key=5f1f07e9dc0cf741b71719cac4adf03eaa8c9ea80d769d57 00:19:06.459 21:33:31 -- nvmf/common.sh@693 -- # 
digest=0 00:19:06.459 21:33:31 -- nvmf/common.sh@694 -- # python - 00:19:06.459 21:33:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.pn3 00:19:06.459 21:33:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.pn3 00:19:06.459 21:33:31 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.pn3 00:19:06.459 21:33:31 -- host/auth.sh@83 -- # gen_key sha256 32 00:19:06.459 21:33:31 -- host/auth.sh@53 -- # local digest len file key 00:19:06.459 21:33:31 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.460 21:33:31 -- host/auth.sh@54 -- # local -A digests 00:19:06.460 21:33:31 -- host/auth.sh@56 -- # digest=sha256 00:19:06.460 21:33:31 -- host/auth.sh@56 -- # len=32 00:19:06.460 21:33:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:06.460 21:33:31 -- host/auth.sh@57 -- # key=5dba7ed94b1957b3291c097fb9a96304 00:19:06.460 21:33:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:19:06.460 21:33:32 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.PZ6 00:19:06.460 21:33:32 -- host/auth.sh@59 -- # format_dhchap_key 5dba7ed94b1957b3291c097fb9a96304 1 00:19:06.460 21:33:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 5dba7ed94b1957b3291c097fb9a96304 1 00:19:06.460 21:33:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:06.460 21:33:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:06.460 21:33:32 -- nvmf/common.sh@693 -- # key=5dba7ed94b1957b3291c097fb9a96304 00:19:06.460 21:33:32 -- nvmf/common.sh@693 -- # digest=1 00:19:06.460 21:33:32 -- nvmf/common.sh@694 -- # python - 00:19:06.460 21:33:32 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.PZ6 00:19:06.460 21:33:32 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.PZ6 00:19:06.460 21:33:32 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.PZ6 00:19:06.460 21:33:32 -- host/auth.sh@84 -- # gen_key sha384 48 00:19:06.460 21:33:32 -- host/auth.sh@53 -- # local digest len file key 00:19:06.460 21:33:32 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.460 21:33:32 -- host/auth.sh@54 -- # local -A digests 00:19:06.460 21:33:32 -- host/auth.sh@56 -- # digest=sha384 00:19:06.460 21:33:32 -- host/auth.sh@56 -- # len=48 00:19:06.460 21:33:32 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.460 21:33:32 -- host/auth.sh@57 -- # key=d1e7e4c55cbda7efa071fb3913647107fc5825e2e2df15d6 00:19:06.460 21:33:32 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:19:06.460 21:33:32 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.yXc 00:19:06.460 21:33:32 -- host/auth.sh@59 -- # format_dhchap_key d1e7e4c55cbda7efa071fb3913647107fc5825e2e2df15d6 2 00:19:06.460 21:33:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 d1e7e4c55cbda7efa071fb3913647107fc5825e2e2df15d6 2 00:19:06.460 21:33:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:06.460 21:33:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:06.460 21:33:32 -- nvmf/common.sh@693 -- # key=d1e7e4c55cbda7efa071fb3913647107fc5825e2e2df15d6 00:19:06.460 21:33:32 -- nvmf/common.sh@693 -- # digest=2 00:19:06.460 21:33:32 -- nvmf/common.sh@694 -- # python - 00:19:06.460 21:33:32 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.yXc 00:19:06.460 21:33:32 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.yXc 00:19:06.460 21:33:32 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.yXc 00:19:06.460 21:33:32 -- host/auth.sh@85 -- # gen_key sha512 64 00:19:06.460 21:33:32 -- host/auth.sh@53 -- # local digest len file key 00:19:06.460 21:33:32 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.460 21:33:32 -- host/auth.sh@54 -- # local -A digests 00:19:06.460 21:33:32 -- host/auth.sh@56 -- # digest=sha512 00:19:06.460 21:33:32 -- host/auth.sh@56 -- # len=64 00:19:06.460 21:33:32 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:06.460 21:33:32 -- host/auth.sh@57 -- # key=9615d4be0698c3d716bea1c305c932b7ef9d81f8a65cdec076cf1ecff3f6b81e 00:19:06.460 21:33:32 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:19:06.460 21:33:32 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.onu 00:19:06.460 21:33:32 -- host/auth.sh@59 -- # format_dhchap_key 9615d4be0698c3d716bea1c305c932b7ef9d81f8a65cdec076cf1ecff3f6b81e 3 00:19:06.460 21:33:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 9615d4be0698c3d716bea1c305c932b7ef9d81f8a65cdec076cf1ecff3f6b81e 3 00:19:06.460 21:33:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:06.460 21:33:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:06.460 21:33:32 -- nvmf/common.sh@693 -- # key=9615d4be0698c3d716bea1c305c932b7ef9d81f8a65cdec076cf1ecff3f6b81e 00:19:06.460 21:33:32 -- nvmf/common.sh@693 -- # digest=3 00:19:06.460 21:33:32 -- nvmf/common.sh@694 -- # python - 00:19:06.718 21:33:32 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.onu 00:19:06.718 21:33:32 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.onu 00:19:06.718 21:33:32 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.onu 00:19:06.718 21:33:32 -- host/auth.sh@87 -- # waitforlisten 2661873 00:19:06.718 21:33:32 -- common/autotest_common.sh@817 -- # '[' -z 2661873 ']' 00:19:06.718 21:33:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.718 21:33:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:06.718 21:33:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
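
Each gen_key call above follows the same recipe: xxd pulls len/2 random bytes from /dev/urandom as a hex string, and an inline Python step wraps them into a DHHC-1 secret. A rough equivalent of gen_key null 32 is sketched below, assuming the qualified-secret layout implied by the trace: base64 of the raw key followed by its little-endian CRC-32, behind a two-digit transform id.

    # Sketch of gen_key null 32: 32 hex characters come from 16 random bytes.
    key=$(xxd -p -c0 -l 16 /dev/urandom)
    # Wrap key + CRC-32 trailer in the DHHC-1 container; transform "00" is the
    # null (cleartext) transform, 01/02/03 would mean SHA-256/384/512.
    python3 -c 'import base64,sys,zlib; raw=bytes.fromhex(sys.argv[1]); crc=zlib.crc32(raw).to_bytes(4,"little"); print("DHHC-1:00:" + base64.b64encode(raw+crc).decode() + ":")' "$key"
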
00:19:06.718 21:33:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:06.718 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:19:06.977 21:33:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:06.977 21:33:32 -- common/autotest_common.sh@850 -- # return 0 00:19:06.977 21:33:32 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:06.977 21:33:32 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.HXy 00:19:06.977 21:33:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.977 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:19:06.977 21:33:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.977 21:33:32 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:06.977 21:33:32 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.pn3 00:19:06.977 21:33:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.977 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:19:06.977 21:33:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.977 21:33:32 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:06.977 21:33:32 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.PZ6 00:19:06.977 21:33:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.977 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:19:06.977 21:33:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.977 21:33:32 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:06.977 21:33:32 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.yXc 00:19:06.977 21:33:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.977 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:19:06.977 21:33:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.977 21:33:32 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:06.977 21:33:32 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.onu 00:19:06.977 21:33:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.977 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:19:06.977 21:33:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.977 21:33:32 -- host/auth.sh@92 -- # nvmet_auth_init 00:19:06.977 21:33:32 -- host/auth.sh@35 -- # get_main_ns_ip 00:19:06.977 21:33:32 -- nvmf/common.sh@717 -- # local ip 00:19:06.977 21:33:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.977 21:33:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.977 21:33:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.977 21:33:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.977 21:33:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.977 21:33:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.977 21:33:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.977 21:33:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.977 21:33:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.977 21:33:32 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:06.977 21:33:32 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:06.977 21:33:32 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:19:06.977 21:33:32 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:06.977 21:33:32 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:06.977 21:33:32 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:06.977 21:33:32 -- nvmf/common.sh@628 -- # local block nvme 00:19:06.978 21:33:32 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:19:06.978 21:33:32 -- nvmf/common.sh@631 -- # modprobe nvmet 00:19:06.978 21:33:32 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:06.978 21:33:32 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:19:07.916 Waiting for block devices as requested 00:19:07.916 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:19:08.174 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:08.174 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:08.174 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:08.432 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:08.432 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:08.432 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:08.432 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:08.690 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:08.690 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:08.690 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:08.690 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:08.947 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:08.947 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:08.947 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:09.205 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:09.205 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:09.772 21:33:35 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:19:09.772 21:33:35 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:09.772 21:33:35 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:19:09.772 21:33:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:09.772 21:33:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:09.772 21:33:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:09.772 21:33:35 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:19:09.772 21:33:35 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:09.772 21:33:35 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:09.772 No valid GPT data, bailing 00:19:09.772 21:33:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:09.772 21:33:35 -- scripts/common.sh@391 -- # pt= 00:19:09.772 21:33:35 -- scripts/common.sh@392 -- # return 1 00:19:09.772 21:33:35 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:19:09.772 21:33:35 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:19:09.772 21:33:35 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:09.772 21:33:35 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:09.772 21:33:35 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:09.772 21:33:35 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:09.772 21:33:35 -- nvmf/common.sh@656 -- # echo 1 00:19:09.772 21:33:35 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:19:09.772 21:33:35 -- nvmf/common.sh@658 -- # echo 1 00:19:09.772 21:33:35 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:19:09.772 21:33:35 -- nvmf/common.sh@661 -- # echo tcp 00:19:09.772 21:33:35 -- 
nvmf/common.sh@662 -- # echo 4420 00:19:09.772 21:33:35 -- nvmf/common.sh@663 -- # echo ipv4 00:19:09.772 21:33:35 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:09.772 21:33:35 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:19:09.772 00:19:09.772 Discovery Log Number of Records 2, Generation counter 2 00:19:09.772 =====Discovery Log Entry 0====== 00:19:09.772 trtype: tcp 00:19:09.772 adrfam: ipv4 00:19:09.772 subtype: current discovery subsystem 00:19:09.772 treq: not specified, sq flow control disable supported 00:19:09.772 portid: 1 00:19:09.772 trsvcid: 4420 00:19:09.772 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:09.772 traddr: 10.0.0.1 00:19:09.772 eflags: none 00:19:09.772 sectype: none 00:19:09.772 =====Discovery Log Entry 1====== 00:19:09.772 trtype: tcp 00:19:09.772 adrfam: ipv4 00:19:09.772 subtype: nvme subsystem 00:19:09.772 treq: not specified, sq flow control disable supported 00:19:09.772 portid: 1 00:19:09.772 trsvcid: 4420 00:19:09.772 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:09.772 traddr: 10.0.0.1 00:19:09.772 eflags: none 00:19:09.772 sectype: none 00:19:09.772 21:33:35 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:09.772 21:33:35 -- host/auth.sh@37 -- # echo 0 00:19:09.772 21:33:35 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:09.772 21:33:35 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:09.772 21:33:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:09.772 21:33:35 -- host/auth.sh@44 -- # digest=sha256 00:19:09.772 21:33:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:09.772 21:33:35 -- host/auth.sh@44 -- # keyid=1 00:19:09.772 21:33:35 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:09.772 21:33:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:09.772 21:33:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:09.772 21:33:35 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:09.772 21:33:35 -- host/auth.sh@100 -- # IFS=, 00:19:09.772 21:33:35 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:19:09.772 21:33:35 -- host/auth.sh@100 -- # IFS=, 00:19:09.772 21:33:35 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:09.772 21:33:35 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:09.772 21:33:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:09.772 21:33:35 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:19:09.772 21:33:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:09.772 21:33:35 -- host/auth.sh@68 -- # keyid=1 00:19:09.772 21:33:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:09.772 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.772 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:09.772 21:33:35 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.772 21:33:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:09.772 21:33:35 -- nvmf/common.sh@717 -- # local ip 00:19:09.772 21:33:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:09.772 21:33:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:09.772 21:33:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.772 21:33:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.772 21:33:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:09.772 21:33:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.772 21:33:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:09.772 21:33:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:09.772 21:33:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:09.772 21:33:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:09.772 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.772 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.030 nvme0n1 00:19:10.030 21:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.030 21:33:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.030 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.030 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.030 21:33:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.030 21:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.030 21:33:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.030 21:33:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.030 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.030 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.030 21:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.030 21:33:35 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:10.030 21:33:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.030 21:33:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.030 21:33:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:10.030 21:33:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.030 21:33:35 -- host/auth.sh@44 -- # digest=sha256 00:19:10.030 21:33:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:10.030 21:33:35 -- host/auth.sh@44 -- # keyid=0 00:19:10.030 21:33:35 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:10.030 21:33:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:10.030 21:33:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:10.030 21:33:35 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:10.030 21:33:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:19:10.030 21:33:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.030 21:33:35 -- host/auth.sh@68 -- # digest=sha256 00:19:10.030 21:33:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:10.030 21:33:35 -- host/auth.sh@68 -- # keyid=0 00:19:10.030 21:33:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.030 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.030 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.031 21:33:35 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.031 21:33:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.031 21:33:35 -- nvmf/common.sh@717 -- # local ip 00:19:10.031 21:33:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.031 21:33:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.031 21:33:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.031 21:33:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.031 21:33:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.031 21:33:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.031 21:33:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.031 21:33:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.031 21:33:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.031 21:33:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:10.031 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.031 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.289 nvme0n1 00:19:10.289 21:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.289 21:33:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.289 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.289 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.289 21:33:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.289 21:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.289 21:33:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.289 21:33:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.289 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.289 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.289 21:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.289 21:33:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.289 21:33:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:10.289 21:33:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.289 21:33:35 -- host/auth.sh@44 -- # digest=sha256 00:19:10.289 21:33:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:10.289 21:33:35 -- host/auth.sh@44 -- # keyid=1 00:19:10.289 21:33:35 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:10.289 21:33:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:10.289 21:33:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:10.289 21:33:35 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:10.289 21:33:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:19:10.289 21:33:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.289 21:33:35 -- host/auth.sh@68 -- # digest=sha256 00:19:10.289 21:33:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:10.289 21:33:35 -- host/auth.sh@68 -- # keyid=1 00:19:10.289 21:33:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.289 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.289 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.289 21:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.289 21:33:35 -- host/auth.sh@70 -- # get_main_ns_ip 
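
Every connect_authenticate pass in this loop repeats one RPC pattern: pin the initiator to a single digest and DH group, attach to the kernel target with the key under test, confirm the controller shows up as nvme0, then detach before the next combination. Condensed into direct rpc.py calls (addresses, NQNs and the key name are taken from the trace; invoking scripts/rpc.py directly rather than through the rpc_cmd wrapper is an assumption):

    # One authenticated attach/detach cycle against the kernel target.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
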
00:19:10.289 21:33:35 -- nvmf/common.sh@717 -- # local ip 00:19:10.289 21:33:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.289 21:33:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.289 21:33:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.289 21:33:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.289 21:33:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.289 21:33:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.289 21:33:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.289 21:33:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.289 21:33:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.289 21:33:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:10.289 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.289 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.548 nvme0n1 00:19:10.548 21:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.548 21:33:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.548 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.548 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:19:10.548 21:33:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.548 21:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.548 21:33:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.548 21:33:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.548 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.548 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:10.548 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.548 21:33:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.548 21:33:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:10.548 21:33:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.548 21:33:36 -- host/auth.sh@44 -- # digest=sha256 00:19:10.548 21:33:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:10.548 21:33:36 -- host/auth.sh@44 -- # keyid=2 00:19:10.548 21:33:36 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:10.548 21:33:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:10.548 21:33:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:10.548 21:33:36 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:10.548 21:33:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:19:10.548 21:33:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.548 21:33:36 -- host/auth.sh@68 -- # digest=sha256 00:19:10.548 21:33:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:10.548 21:33:36 -- host/auth.sh@68 -- # keyid=2 00:19:10.548 21:33:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.548 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.548 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:10.548 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.548 21:33:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.548 21:33:36 -- nvmf/common.sh@717 -- # local ip 00:19:10.548 21:33:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.548 21:33:36 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:19:10.548 21:33:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.548 21:33:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.548 21:33:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.548 21:33:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.549 21:33:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.549 21:33:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.549 21:33:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.549 21:33:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:10.549 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.549 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:10.549 nvme0n1 00:19:10.549 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.549 21:33:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.549 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.549 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:10.549 21:33:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.549 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.807 21:33:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.807 21:33:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.807 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.807 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:10.807 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.807 21:33:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.807 21:33:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:10.807 21:33:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.807 21:33:36 -- host/auth.sh@44 -- # digest=sha256 00:19:10.807 21:33:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:10.807 21:33:36 -- host/auth.sh@44 -- # keyid=3 00:19:10.807 21:33:36 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:10.807 21:33:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:10.807 21:33:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:10.807 21:33:36 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:10.807 21:33:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:19:10.807 21:33:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.807 21:33:36 -- host/auth.sh@68 -- # digest=sha256 00:19:10.807 21:33:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:10.807 21:33:36 -- host/auth.sh@68 -- # keyid=3 00:19:10.807 21:33:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.807 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.807 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:10.807 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.807 21:33:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.807 21:33:36 -- nvmf/common.sh@717 -- # local ip 00:19:10.807 21:33:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.807 21:33:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.807 21:33:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
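
The same trace block repeats from here on for every (digest, dhgroup, keyid) combination; only the DHHC-1 key blob and the group name change between repetitions. The sweep driving it, pieced together from the host/auth.sh@107-@111 and @42-@49 lines, looks roughly like the sketch below. The configfs destinations of the three echoes are an assumption ($nvmet_host_dir is hypothetical); the log records only the echoed values:

    # Target side: publish the key under test (host/auth.sh@42-@49).
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}                             # DHHC-1:0N:...: blob
        echo "hmac(${digest})" > "$nvmet_host_dir/dhchap_hash"      # assumed path
        echo "$dhgroup" > "$nvmet_host_dir/dhchap_dhgroup"          # assumed path
        echo "$key" > "$nvmet_host_dir/dhchap_key"                  # assumed path
    }

    # Outer sweep (host/auth.sh@107-@111); connect_authenticate is sketched
    # further below.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
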
00:19:10.807 21:33:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.807 21:33:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.807 21:33:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.807 21:33:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.807 21:33:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.807 21:33:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.807 21:33:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:10.807 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.807 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:10.807 nvme0n1 00:19:10.807 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.807 21:33:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.807 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.807 21:33:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.807 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:10.807 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.807 21:33:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.807 21:33:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.807 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.807 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:10.807 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.807 21:33:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.807 21:33:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:10.807 21:33:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.807 21:33:36 -- host/auth.sh@44 -- # digest=sha256 00:19:10.807 21:33:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:10.807 21:33:36 -- host/auth.sh@44 -- # keyid=4 00:19:10.807 21:33:36 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:10.807 21:33:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:10.807 21:33:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:10.807 21:33:36 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:10.807 21:33:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:19:10.807 21:33:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.807 21:33:36 -- host/auth.sh@68 -- # digest=sha256 00:19:10.807 21:33:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:10.807 21:33:36 -- host/auth.sh@68 -- # keyid=4 00:19:10.807 21:33:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.807 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.807 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:10.807 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.807 21:33:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.807 21:33:36 -- nvmf/common.sh@717 -- # local ip 00:19:10.807 21:33:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.807 21:33:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.807 21:33:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.807 21:33:36 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.807 21:33:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.807 21:33:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.807 21:33:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.807 21:33:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.807 21:33:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.807 21:33:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:10.807 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.807 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:11.066 nvme0n1 00:19:11.066 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.066 21:33:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.066 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.066 21:33:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.066 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:11.066 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.066 21:33:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.066 21:33:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.066 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.066 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:11.066 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.066 21:33:36 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.066 21:33:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.066 21:33:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:11.066 21:33:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.066 21:33:36 -- host/auth.sh@44 -- # digest=sha256 00:19:11.066 21:33:36 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:11.066 21:33:36 -- host/auth.sh@44 -- # keyid=0 00:19:11.066 21:33:36 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:11.066 21:33:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:11.066 21:33:36 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:11.066 21:33:36 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:11.066 21:33:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:19:11.066 21:33:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.066 21:33:36 -- host/auth.sh@68 -- # digest=sha256 00:19:11.066 21:33:36 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:11.066 21:33:36 -- host/auth.sh@68 -- # keyid=0 00:19:11.066 21:33:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.066 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.066 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:11.066 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.066 21:33:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.066 21:33:36 -- nvmf/common.sh@717 -- # local ip 00:19:11.066 21:33:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.066 21:33:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.066 21:33:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.066 21:33:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.066 21:33:36 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:19:11.066 21:33:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.066 21:33:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.066 21:33:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.066 21:33:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.066 21:33:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:11.066 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.066 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:11.324 nvme0n1 00:19:11.324 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.324 21:33:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.324 21:33:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.324 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.324 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:11.324 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.324 21:33:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.324 21:33:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.324 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.324 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:11.324 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.324 21:33:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.324 21:33:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:11.324 21:33:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.324 21:33:36 -- host/auth.sh@44 -- # digest=sha256 00:19:11.324 21:33:36 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:11.324 21:33:36 -- host/auth.sh@44 -- # keyid=1 00:19:11.324 21:33:36 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:11.324 21:33:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:11.324 21:33:36 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:11.324 21:33:36 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:11.324 21:33:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:19:11.324 21:33:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.324 21:33:36 -- host/auth.sh@68 -- # digest=sha256 00:19:11.324 21:33:36 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:11.324 21:33:36 -- host/auth.sh@68 -- # keyid=1 00:19:11.324 21:33:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.324 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.324 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:11.324 21:33:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.324 21:33:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.324 21:33:36 -- nvmf/common.sh@717 -- # local ip 00:19:11.324 21:33:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.324 21:33:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.324 21:33:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.324 21:33:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.324 21:33:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.324 21:33:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.324 21:33:36 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.324 21:33:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.324 21:33:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.324 21:33:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:11.324 21:33:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.324 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:11.583 nvme0n1 00:19:11.583 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.583 21:33:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.583 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.583 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:11.583 21:33:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.583 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.583 21:33:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.583 21:33:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.583 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.583 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:11.583 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.583 21:33:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.583 21:33:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:11.583 21:33:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.583 21:33:37 -- host/auth.sh@44 -- # digest=sha256 00:19:11.583 21:33:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:11.583 21:33:37 -- host/auth.sh@44 -- # keyid=2 00:19:11.583 21:33:37 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:11.583 21:33:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:11.583 21:33:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:11.583 21:33:37 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:11.583 21:33:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:19:11.583 21:33:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.583 21:33:37 -- host/auth.sh@68 -- # digest=sha256 00:19:11.583 21:33:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:11.583 21:33:37 -- host/auth.sh@68 -- # keyid=2 00:19:11.583 21:33:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.583 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.583 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:11.583 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.583 21:33:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.583 21:33:37 -- nvmf/common.sh@717 -- # local ip 00:19:11.583 21:33:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.583 21:33:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.583 21:33:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.583 21:33:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.583 21:33:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.583 21:33:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.583 21:33:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.583 21:33:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.583 21:33:37 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:19:11.583 21:33:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:11.583 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.583 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:11.841 nvme0n1 00:19:11.841 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.841 21:33:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.841 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.841 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:11.841 21:33:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.841 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.841 21:33:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.841 21:33:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.841 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.841 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:11.841 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.841 21:33:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.841 21:33:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:11.841 21:33:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.841 21:33:37 -- host/auth.sh@44 -- # digest=sha256 00:19:11.841 21:33:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:11.841 21:33:37 -- host/auth.sh@44 -- # keyid=3 00:19:11.841 21:33:37 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:11.841 21:33:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:11.841 21:33:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:11.841 21:33:37 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:11.841 21:33:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:19:11.841 21:33:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.841 21:33:37 -- host/auth.sh@68 -- # digest=sha256 00:19:11.841 21:33:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:11.841 21:33:37 -- host/auth.sh@68 -- # keyid=3 00:19:11.841 21:33:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.841 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.841 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:11.841 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.841 21:33:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.841 21:33:37 -- nvmf/common.sh@717 -- # local ip 00:19:11.841 21:33:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.841 21:33:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.841 21:33:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.841 21:33:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.841 21:33:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.841 21:33:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.841 21:33:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.841 21:33:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.841 21:33:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.841 21:33:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:11.841 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.841 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:12.100 nvme0n1 00:19:12.100 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.100 21:33:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.100 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.100 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:12.100 21:33:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.100 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.100 21:33:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.100 21:33:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.100 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.100 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:12.100 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.100 21:33:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.100 21:33:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:12.100 21:33:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.100 21:33:37 -- host/auth.sh@44 -- # digest=sha256 00:19:12.100 21:33:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:12.100 21:33:37 -- host/auth.sh@44 -- # keyid=4 00:19:12.100 21:33:37 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:12.100 21:33:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:12.100 21:33:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:12.100 21:33:37 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:12.100 21:33:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:19:12.100 21:33:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.100 21:33:37 -- host/auth.sh@68 -- # digest=sha256 00:19:12.100 21:33:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:12.100 21:33:37 -- host/auth.sh@68 -- # keyid=4 00:19:12.100 21:33:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.100 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.100 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:12.100 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.100 21:33:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.100 21:33:37 -- nvmf/common.sh@717 -- # local ip 00:19:12.100 21:33:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.100 21:33:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.100 21:33:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.100 21:33:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.100 21:33:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:12.100 21:33:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.100 21:33:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:12.100 21:33:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:12.100 21:33:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:12.100 21:33:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:19:12.100 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.100 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:12.358 nvme0n1 00:19:12.358 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.358 21:33:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.358 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.358 21:33:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.358 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:12.358 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.358 21:33:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.358 21:33:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.358 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.358 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:12.358 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.359 21:33:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.359 21:33:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.359 21:33:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:12.359 21:33:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.359 21:33:37 -- host/auth.sh@44 -- # digest=sha256 00:19:12.359 21:33:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:12.359 21:33:37 -- host/auth.sh@44 -- # keyid=0 00:19:12.359 21:33:37 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:12.359 21:33:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:12.359 21:33:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:12.359 21:33:37 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:12.359 21:33:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:19:12.359 21:33:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.359 21:33:37 -- host/auth.sh@68 -- # digest=sha256 00:19:12.359 21:33:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:12.359 21:33:37 -- host/auth.sh@68 -- # keyid=0 00:19:12.359 21:33:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.359 21:33:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.359 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:12.359 21:33:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.359 21:33:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.359 21:33:37 -- nvmf/common.sh@717 -- # local ip 00:19:12.359 21:33:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.359 21:33:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.359 21:33:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.359 21:33:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.359 21:33:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:12.359 21:33:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.359 21:33:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:12.359 21:33:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:12.359 21:33:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:12.359 21:33:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:12.359 21:33:37 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:12.359 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:19:12.617 nvme0n1 00:19:12.617 21:33:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.617 21:33:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.617 21:33:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.617 21:33:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.617 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.617 21:33:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.617 21:33:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.617 21:33:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.617 21:33:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.617 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.876 21:33:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.876 21:33:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.876 21:33:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:12.876 21:33:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.876 21:33:38 -- host/auth.sh@44 -- # digest=sha256 00:19:12.876 21:33:38 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:12.876 21:33:38 -- host/auth.sh@44 -- # keyid=1 00:19:12.876 21:33:38 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:12.876 21:33:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:12.876 21:33:38 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:12.876 21:33:38 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:12.876 21:33:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:19:12.876 21:33:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.876 21:33:38 -- host/auth.sh@68 -- # digest=sha256 00:19:12.876 21:33:38 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:12.876 21:33:38 -- host/auth.sh@68 -- # keyid=1 00:19:12.876 21:33:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.876 21:33:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.876 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.876 21:33:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.876 21:33:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.876 21:33:38 -- nvmf/common.sh@717 -- # local ip 00:19:12.876 21:33:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.876 21:33:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.876 21:33:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.876 21:33:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.876 21:33:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:12.876 21:33:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.876 21:33:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:12.876 21:33:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:12.876 21:33:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:12.876 21:33:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:12.876 21:33:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.876 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:19:13.135 nvme0n1 00:19:13.135 
21:33:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.135 21:33:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.136 21:33:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.136 21:33:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.136 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:19:13.136 21:33:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.136 21:33:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.136 21:33:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.136 21:33:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.136 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:19:13.136 21:33:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.136 21:33:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.136 21:33:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:13.136 21:33:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.136 21:33:38 -- host/auth.sh@44 -- # digest=sha256 00:19:13.136 21:33:38 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:13.136 21:33:38 -- host/auth.sh@44 -- # keyid=2 00:19:13.136 21:33:38 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:13.136 21:33:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:13.136 21:33:38 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:13.136 21:33:38 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:13.136 21:33:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:19:13.136 21:33:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.136 21:33:38 -- host/auth.sh@68 -- # digest=sha256 00:19:13.136 21:33:38 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:13.136 21:33:38 -- host/auth.sh@68 -- # keyid=2 00:19:13.136 21:33:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.136 21:33:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.136 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:19:13.136 21:33:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.136 21:33:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.136 21:33:38 -- nvmf/common.sh@717 -- # local ip 00:19:13.136 21:33:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.136 21:33:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.136 21:33:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.136 21:33:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.136 21:33:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:13.136 21:33:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.136 21:33:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:13.136 21:33:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:13.136 21:33:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:13.136 21:33:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:13.136 21:33:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.136 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:19:13.394 nvme0n1 00:19:13.394 21:33:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.394 21:33:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.394 21:33:38 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.394 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:19:13.394 21:33:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.394 21:33:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.394 21:33:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.394 21:33:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.394 21:33:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.394 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:19:13.394 21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.394 21:33:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.394 21:33:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:13.394 21:33:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.394 21:33:39 -- host/auth.sh@44 -- # digest=sha256 00:19:13.394 21:33:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:13.394 21:33:39 -- host/auth.sh@44 -- # keyid=3 00:19:13.394 21:33:39 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:13.394 21:33:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:13.394 21:33:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:13.394 21:33:39 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:13.394 21:33:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:19:13.394 21:33:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.394 21:33:39 -- host/auth.sh@68 -- # digest=sha256 00:19:13.394 21:33:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:13.394 21:33:39 -- host/auth.sh@68 -- # keyid=3 00:19:13.394 21:33:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.394 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.394 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.394 21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.394 21:33:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.394 21:33:39 -- nvmf/common.sh@717 -- # local ip 00:19:13.394 21:33:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.394 21:33:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.394 21:33:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.394 21:33:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.394 21:33:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:13.394 21:33:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.394 21:33:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:13.394 21:33:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:13.394 21:33:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:13.394 21:33:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:13.394 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.394 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.653 nvme0n1 00:19:13.653 21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.653 21:33:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.653 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.653 21:33:39 -- common/autotest_common.sh@10 -- # set +x 
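
The host side of each iteration (host/auth.sh@66-@74) narrows the initiator to one digest/dhgroup pair, attaches with the matching key, and checks that the authenticated controller actually shows up before detaching. Reconstructed from the trace; the argument plumbing and return-value handling are assumed:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Allow only the negotiation parameters under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        # Connect through the kernel target with the corresponding key.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid"
        # The controller must appear under its expected name, then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
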
00:19:13.653 21:33:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.653 21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.911 21:33:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.911 21:33:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.911 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.911 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.911 21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.911 21:33:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.911 21:33:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:13.911 21:33:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.911 21:33:39 -- host/auth.sh@44 -- # digest=sha256 00:19:13.911 21:33:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:13.911 21:33:39 -- host/auth.sh@44 -- # keyid=4 00:19:13.911 21:33:39 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:13.911 21:33:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:13.911 21:33:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:13.911 21:33:39 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:13.911 21:33:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:19:13.911 21:33:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.911 21:33:39 -- host/auth.sh@68 -- # digest=sha256 00:19:13.911 21:33:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:13.911 21:33:39 -- host/auth.sh@68 -- # keyid=4 00:19:13.911 21:33:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.911 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.911 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.911 21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.911 21:33:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.911 21:33:39 -- nvmf/common.sh@717 -- # local ip 00:19:13.911 21:33:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.911 21:33:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.911 21:33:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.911 21:33:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.911 21:33:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:13.911 21:33:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.911 21:33:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:13.911 21:33:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:13.911 21:33:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:13.911 21:33:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:13.911 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.911 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:14.169 nvme0n1 00:19:14.169 21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.169 21:33:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.169 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.169 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:14.169 21:33:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:14.169 
21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.169 21:33:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.169 21:33:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.169 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.169 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:14.169 21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.169 21:33:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.169 21:33:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:14.169 21:33:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:14.169 21:33:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:14.169 21:33:39 -- host/auth.sh@44 -- # digest=sha256 00:19:14.169 21:33:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:14.169 21:33:39 -- host/auth.sh@44 -- # keyid=0 00:19:14.169 21:33:39 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:14.169 21:33:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:14.169 21:33:39 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:14.169 21:33:39 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:14.169 21:33:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:19:14.169 21:33:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:14.169 21:33:39 -- host/auth.sh@68 -- # digest=sha256 00:19:14.169 21:33:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:14.169 21:33:39 -- host/auth.sh@68 -- # keyid=0 00:19:14.169 21:33:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.169 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.169 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:14.169 21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.169 21:33:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:14.169 21:33:39 -- nvmf/common.sh@717 -- # local ip 00:19:14.169 21:33:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:14.169 21:33:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:14.169 21:33:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.169 21:33:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.169 21:33:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:14.169 21:33:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.169 21:33:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:14.169 21:33:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:14.169 21:33:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:14.169 21:33:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:14.169 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.169 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:14.734 nvme0n1 00:19:14.734 21:33:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.734 21:33:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.734 21:33:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.734 21:33:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:14.734 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:19:14.734 21:33:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.734 21:33:40 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.734 21:33:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.734 21:33:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.734 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:19:14.734 21:33:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.734 21:33:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:14.734 21:33:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:14.734 21:33:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:14.734 21:33:40 -- host/auth.sh@44 -- # digest=sha256 00:19:14.734 21:33:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:14.734 21:33:40 -- host/auth.sh@44 -- # keyid=1 00:19:14.734 21:33:40 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:14.734 21:33:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:14.734 21:33:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:14.734 21:33:40 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:14.734 21:33:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:19:14.734 21:33:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:14.734 21:33:40 -- host/auth.sh@68 -- # digest=sha256 00:19:14.734 21:33:40 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:14.734 21:33:40 -- host/auth.sh@68 -- # keyid=1 00:19:14.734 21:33:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.734 21:33:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.734 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:19:14.734 21:33:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.734 21:33:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:14.734 21:33:40 -- nvmf/common.sh@717 -- # local ip 00:19:14.734 21:33:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:14.734 21:33:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:14.734 21:33:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.734 21:33:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.734 21:33:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:14.734 21:33:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.734 21:33:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:14.734 21:33:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:14.734 21:33:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:14.734 21:33:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:14.734 21:33:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.734 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:19:15.300 nvme0n1 00:19:15.300 21:33:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.300 21:33:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.300 21:33:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.300 21:33:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:15.300 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:19:15.300 21:33:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.300 21:33:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.300 21:33:40 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:15.300 21:33:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.300 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:19:15.300 21:33:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.300 21:33:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:15.300 21:33:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:15.300 21:33:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:15.300 21:33:40 -- host/auth.sh@44 -- # digest=sha256 00:19:15.300 21:33:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:15.300 21:33:40 -- host/auth.sh@44 -- # keyid=2 00:19:15.300 21:33:40 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:15.300 21:33:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:15.300 21:33:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:15.300 21:33:40 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:15.300 21:33:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:19:15.300 21:33:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:15.300 21:33:40 -- host/auth.sh@68 -- # digest=sha256 00:19:15.300 21:33:40 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:15.300 21:33:40 -- host/auth.sh@68 -- # keyid=2 00:19:15.300 21:33:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.300 21:33:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.300 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:19:15.300 21:33:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.300 21:33:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:15.300 21:33:40 -- nvmf/common.sh@717 -- # local ip 00:19:15.300 21:33:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:15.300 21:33:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:15.300 21:33:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.300 21:33:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.300 21:33:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:15.300 21:33:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.300 21:33:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:15.300 21:33:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:15.300 21:33:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:15.300 21:33:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:15.300 21:33:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.300 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 nvme0n1 00:19:15.867 21:33:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.867 21:33:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.867 21:33:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:15.867 21:33:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.867 21:33:41 -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 21:33:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.867 21:33:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.867 21:33:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.867 21:33:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.867 21:33:41 -- common/autotest_common.sh@10 -- # 
set +x 00:19:15.867 21:33:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.867 21:33:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:15.867 21:33:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:15.867 21:33:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:15.867 21:33:41 -- host/auth.sh@44 -- # digest=sha256 00:19:15.867 21:33:41 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:15.867 21:33:41 -- host/auth.sh@44 -- # keyid=3 00:19:15.867 21:33:41 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:15.867 21:33:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:15.867 21:33:41 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:15.867 21:33:41 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:15.867 21:33:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:19:15.867 21:33:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:15.867 21:33:41 -- host/auth.sh@68 -- # digest=sha256 00:19:15.867 21:33:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:15.867 21:33:41 -- host/auth.sh@68 -- # keyid=3 00:19:15.867 21:33:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.867 21:33:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.867 21:33:41 -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 21:33:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.867 21:33:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:15.867 21:33:41 -- nvmf/common.sh@717 -- # local ip 00:19:15.867 21:33:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:15.867 21:33:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:15.867 21:33:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.867 21:33:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.867 21:33:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:15.867 21:33:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.867 21:33:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:15.867 21:33:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:15.867 21:33:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:15.867 21:33:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:15.867 21:33:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.867 21:33:41 -- common/autotest_common.sh@10 -- # set +x 00:19:16.438 nvme0n1 00:19:16.438 21:33:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.438 21:33:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.438 21:33:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.438 21:33:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.438 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:19:16.438 21:33:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.438 21:33:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.438 21:33:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.438 21:33:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.438 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:19:16.438 21:33:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.438 21:33:42 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.438 21:33:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:16.438 21:33:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.438 21:33:42 -- host/auth.sh@44 -- # digest=sha256 00:19:16.438 21:33:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:16.438 21:33:42 -- host/auth.sh@44 -- # keyid=4 00:19:16.438 21:33:42 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:16.438 21:33:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:16.438 21:33:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:16.438 21:33:42 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:16.438 21:33:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:19:16.438 21:33:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.438 21:33:42 -- host/auth.sh@68 -- # digest=sha256 00:19:16.438 21:33:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:16.438 21:33:42 -- host/auth.sh@68 -- # keyid=4 00:19:16.438 21:33:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.438 21:33:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.438 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:19:16.438 21:33:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.438 21:33:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.438 21:33:42 -- nvmf/common.sh@717 -- # local ip 00:19:16.438 21:33:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.438 21:33:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.438 21:33:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.438 21:33:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.438 21:33:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:16.438 21:33:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.438 21:33:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:16.438 21:33:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:16.438 21:33:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:16.438 21:33:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.438 21:33:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.438 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:19:17.005 nvme0n1 00:19:17.005 21:33:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.005 21:33:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.005 21:33:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.005 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:19:17.005 21:33:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.005 21:33:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.005 21:33:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.005 21:33:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.005 21:33:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.005 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:19:17.264 21:33:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.264 21:33:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.264 21:33:42 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:17.264 21:33:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:17.264 21:33:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.264 21:33:42 -- host/auth.sh@44 -- # digest=sha256 00:19:17.264 21:33:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:17.264 21:33:42 -- host/auth.sh@44 -- # keyid=0 00:19:17.264 21:33:42 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:17.264 21:33:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:17.264 21:33:42 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:17.264 21:33:42 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:17.264 21:33:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:19:17.264 21:33:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.264 21:33:42 -- host/auth.sh@68 -- # digest=sha256 00:19:17.264 21:33:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:17.264 21:33:42 -- host/auth.sh@68 -- # keyid=0 00:19:17.264 21:33:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:17.264 21:33:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.264 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:19:17.264 21:33:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.264 21:33:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.264 21:33:42 -- nvmf/common.sh@717 -- # local ip 00:19:17.264 21:33:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.264 21:33:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.264 21:33:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.264 21:33:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.264 21:33:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:17.264 21:33:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.264 21:33:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:17.264 21:33:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:17.264 21:33:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:17.264 21:33:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:17.264 21:33:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.264 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:19:18.197 nvme0n1 00:19:18.197 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.197 21:33:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.197 21:33:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.197 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.197 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:19:18.197 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.197 21:33:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.197 21:33:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.197 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.197 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:19:18.197 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.197 21:33:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.197 21:33:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:18.197 21:33:43 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.197 21:33:43 -- host/auth.sh@44 -- # digest=sha256 00:19:18.197 21:33:43 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:18.197 21:33:43 -- host/auth.sh@44 -- # keyid=1 00:19:18.197 21:33:43 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:18.197 21:33:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:18.197 21:33:43 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:18.197 21:33:43 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:18.197 21:33:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:19:18.197 21:33:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.197 21:33:43 -- host/auth.sh@68 -- # digest=sha256 00:19:18.197 21:33:43 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:18.197 21:33:43 -- host/auth.sh@68 -- # keyid=1 00:19:18.197 21:33:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:18.197 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.197 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:19:18.197 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.197 21:33:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.197 21:33:43 -- nvmf/common.sh@717 -- # local ip 00:19:18.198 21:33:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.198 21:33:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.198 21:33:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.198 21:33:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.198 21:33:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.198 21:33:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.198 21:33:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.198 21:33:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.198 21:33:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.198 21:33:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:18.198 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.198 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:19:19.130 nvme0n1 00:19:19.130 21:33:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.130 21:33:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.130 21:33:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.130 21:33:44 -- common/autotest_common.sh@10 -- # set +x 00:19:19.130 21:33:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.130 21:33:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.130 21:33:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.130 21:33:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.130 21:33:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.130 21:33:44 -- common/autotest_common.sh@10 -- # set +x 00:19:19.130 21:33:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.130 21:33:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.130 21:33:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:19.130 21:33:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.130 21:33:44 -- host/auth.sh@44 -- # digest=sha256 
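
The cycle that repeats throughout this trace is the same four-step verification each time: restrict the initiator's DH-HMAC-CHAP options, attach the controller with one of the pre-provisioned keys, confirm the controller actually appeared, then detach before the next combination. A minimal sketch of that step, reconstructed only from the rpc_cmd invocations visible above (rpc_cmd is assumed to be the usual SPDK test wrapper around scripts/rpc.py):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Pin the initiator to exactly one digest and one DH group, so a successful
    # attach proves this combination negotiated rather than a fallback.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
    # The attach only reports the bdev once authentication succeeded; this jq
    # check mirrors the '[[ nvme0 == nvme0 ]]' comparisons traced above.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
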
00:19:19.130 21:33:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:19.130 21:33:44 -- host/auth.sh@44 -- # keyid=2 00:19:19.130 21:33:44 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:19.130 21:33:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:19.130 21:33:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:19.130 21:33:44 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:19.130 21:33:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:19:19.130 21:33:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.130 21:33:44 -- host/auth.sh@68 -- # digest=sha256 00:19:19.130 21:33:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:19.130 21:33:44 -- host/auth.sh@68 -- # keyid=2 00:19:19.130 21:33:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:19.130 21:33:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.130 21:33:44 -- common/autotest_common.sh@10 -- # set +x 00:19:19.130 21:33:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.130 21:33:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.130 21:33:44 -- nvmf/common.sh@717 -- # local ip 00:19:19.130 21:33:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.130 21:33:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.130 21:33:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.130 21:33:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.130 21:33:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.130 21:33:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.130 21:33:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.130 21:33:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.130 21:33:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.130 21:33:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:19.130 21:33:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.130 21:33:44 -- common/autotest_common.sh@10 -- # set +x 00:19:20.064 nvme0n1 00:19:20.064 21:33:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.064 21:33:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.064 21:33:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.064 21:33:45 -- common/autotest_common.sh@10 -- # set +x 00:19:20.064 21:33:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:20.064 21:33:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.064 21:33:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.064 21:33:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.064 21:33:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.064 21:33:45 -- common/autotest_common.sh@10 -- # set +x 00:19:20.064 21:33:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.064 21:33:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.064 21:33:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:20.064 21:33:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.065 21:33:45 -- host/auth.sh@44 -- # digest=sha256 00:19:20.065 21:33:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:20.065 21:33:45 -- host/auth.sh@44 -- # keyid=3 00:19:20.065 21:33:45 -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:20.065 21:33:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:20.065 21:33:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:20.065 21:33:45 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:20.065 21:33:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:19:20.065 21:33:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.065 21:33:45 -- host/auth.sh@68 -- # digest=sha256 00:19:20.065 21:33:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:20.065 21:33:45 -- host/auth.sh@68 -- # keyid=3 00:19:20.065 21:33:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.065 21:33:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.065 21:33:45 -- common/autotest_common.sh@10 -- # set +x 00:19:20.065 21:33:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.065 21:33:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:20.065 21:33:45 -- nvmf/common.sh@717 -- # local ip 00:19:20.065 21:33:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.065 21:33:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.065 21:33:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.065 21:33:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.065 21:33:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:20.065 21:33:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.065 21:33:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:20.065 21:33:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:20.065 21:33:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:20.065 21:33:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:20.065 21:33:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.065 21:33:45 -- common/autotest_common.sh@10 -- # set +x 00:19:20.997 nvme0n1 00:19:20.997 21:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.997 21:33:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.997 21:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.997 21:33:46 -- common/autotest_common.sh@10 -- # set +x 00:19:20.997 21:33:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:20.997 21:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.997 21:33:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.997 21:33:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.997 21:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.997 21:33:46 -- common/autotest_common.sh@10 -- # set +x 00:19:20.997 21:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.997 21:33:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.997 21:33:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:20.997 21:33:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.997 21:33:46 -- host/auth.sh@44 -- # digest=sha256 00:19:20.997 21:33:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:20.997 21:33:46 -- host/auth.sh@44 -- # keyid=4 00:19:20.997 21:33:46 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:20.997 
21:33:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:20.997 21:33:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:20.997 21:33:46 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:20.997 21:33:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:19:20.997 21:33:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.997 21:33:46 -- host/auth.sh@68 -- # digest=sha256 00:19:20.997 21:33:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:20.997 21:33:46 -- host/auth.sh@68 -- # keyid=4 00:19:20.997 21:33:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.997 21:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.997 21:33:46 -- common/autotest_common.sh@10 -- # set +x 00:19:20.997 21:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.997 21:33:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:20.997 21:33:46 -- nvmf/common.sh@717 -- # local ip 00:19:20.997 21:33:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.997 21:33:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.997 21:33:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.997 21:33:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.997 21:33:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:20.998 21:33:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.998 21:33:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:20.998 21:33:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:20.998 21:33:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:20.998 21:33:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:20.998 21:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.998 21:33:46 -- common/autotest_common.sh@10 -- # set +x 00:19:21.929 nvme0n1 00:19:21.929 21:33:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.929 21:33:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.929 21:33:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.929 21:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:21.929 21:33:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.929 21:33:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.929 21:33:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.929 21:33:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.929 21:33:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.929 21:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:22.186 21:33:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.186 21:33:47 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:22.186 21:33:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.186 21:33:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.186 21:33:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:22.186 21:33:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.186 21:33:47 -- host/auth.sh@44 -- # digest=sha384 00:19:22.186 21:33:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:22.186 21:33:47 -- host/auth.sh@44 -- # keyid=0 00:19:22.186 21:33:47 -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:22.186 21:33:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:22.186 21:33:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:22.186 21:33:47 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:22.186 21:33:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:19:22.186 21:33:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.186 21:33:47 -- host/auth.sh@68 -- # digest=sha384 00:19:22.186 21:33:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:22.186 21:33:47 -- host/auth.sh@68 -- # keyid=0 00:19:22.186 21:33:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:22.186 21:33:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.186 21:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:22.186 21:33:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.186 21:33:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.186 21:33:47 -- nvmf/common.sh@717 -- # local ip 00:19:22.186 21:33:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.186 21:33:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.186 21:33:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.186 21:33:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.186 21:33:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:22.186 21:33:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.186 21:33:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:22.186 21:33:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:22.186 21:33:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:22.186 21:33:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:22.186 21:33:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.186 21:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:22.186 nvme0n1 00:19:22.186 21:33:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.186 21:33:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.186 21:33:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.186 21:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:22.186 21:33:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.187 21:33:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.187 21:33:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.187 21:33:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.187 21:33:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.187 21:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:22.187 21:33:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.187 21:33:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.187 21:33:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:22.187 21:33:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.187 21:33:47 -- host/auth.sh@44 -- # digest=sha384 00:19:22.187 21:33:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:22.187 21:33:47 -- host/auth.sh@44 -- # keyid=1 00:19:22.187 21:33:47 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:22.187 21:33:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:22.187 
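
The secrets being provisioned here follow the standard DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>:, where the two-digit field after DHHC-1 encodes how the secret was transformed (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); that is why key0 and key1 in this run carry DHHC-1:00: prefixes while key2, key3 and key4 carry 01, 02 and 03. A secret of each class can be produced with the nvme-cli helper, for example (illustrative only, not part of this test run):

# -m selects the transformation: 0=none, 1=SHA-256, 2=SHA-384, 3=SHA-512
nvme gen-dhchap-key -m 1 -n nqn.2024-02.io.spdk:host0   # yields a DHHC-1:01:... secret
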
21:33:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:22.187 21:33:47 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:22.187 21:33:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:19:22.187 21:33:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.187 21:33:47 -- host/auth.sh@68 -- # digest=sha384 00:19:22.187 21:33:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:22.187 21:33:47 -- host/auth.sh@68 -- # keyid=1 00:19:22.187 21:33:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:22.187 21:33:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.187 21:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:22.187 21:33:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.187 21:33:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.187 21:33:47 -- nvmf/common.sh@717 -- # local ip 00:19:22.187 21:33:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.187 21:33:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.187 21:33:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.187 21:33:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.187 21:33:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:22.187 21:33:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.187 21:33:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:22.187 21:33:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:22.187 21:33:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:22.187 21:33:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:22.187 21:33:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.187 21:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:22.445 nvme0n1 00:19:22.445 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.445 21:33:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.445 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.445 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.445 21:33:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.445 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.445 21:33:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.445 21:33:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.445 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.445 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.445 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.445 21:33:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.445 21:33:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:22.445 21:33:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.445 21:33:48 -- host/auth.sh@44 -- # digest=sha384 00:19:22.445 21:33:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:22.445 21:33:48 -- host/auth.sh@44 -- # keyid=2 00:19:22.445 21:33:48 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:22.445 21:33:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:22.445 21:33:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:22.445 21:33:48 -- host/auth.sh@49 -- # echo 
DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:22.445 21:33:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:19:22.445 21:33:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.445 21:33:48 -- host/auth.sh@68 -- # digest=sha384 00:19:22.445 21:33:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:22.445 21:33:48 -- host/auth.sh@68 -- # keyid=2 00:19:22.445 21:33:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:22.445 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.445 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.445 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.445 21:33:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.445 21:33:48 -- nvmf/common.sh@717 -- # local ip 00:19:22.445 21:33:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.445 21:33:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.445 21:33:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.445 21:33:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.445 21:33:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:22.445 21:33:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.445 21:33:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:22.445 21:33:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:22.445 21:33:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:22.445 21:33:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:22.445 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.445 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.704 nvme0n1 00:19:22.704 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.704 21:33:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.704 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.704 21:33:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.704 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.704 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.704 21:33:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.704 21:33:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.704 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.704 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.704 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.704 21:33:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.704 21:33:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:22.704 21:33:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.704 21:33:48 -- host/auth.sh@44 -- # digest=sha384 00:19:22.704 21:33:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:22.704 21:33:48 -- host/auth.sh@44 -- # keyid=3 00:19:22.704 21:33:48 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:22.704 21:33:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:22.704 21:33:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:22.704 21:33:48 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:22.704 21:33:48 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:19:22.704 21:33:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.704 21:33:48 -- host/auth.sh@68 -- # digest=sha384 00:19:22.704 21:33:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:22.704 21:33:48 -- host/auth.sh@68 -- # keyid=3 00:19:22.704 21:33:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:22.704 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.704 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.704 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.704 21:33:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.704 21:33:48 -- nvmf/common.sh@717 -- # local ip 00:19:22.704 21:33:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.704 21:33:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.704 21:33:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.704 21:33:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.704 21:33:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:22.704 21:33:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.704 21:33:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:22.704 21:33:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:22.704 21:33:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:22.704 21:33:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:22.704 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.704 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.963 nvme0n1 00:19:22.963 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.963 21:33:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.963 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.963 21:33:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.963 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.963 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.963 21:33:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.963 21:33:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.963 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.963 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.963 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.963 21:33:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.963 21:33:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:22.963 21:33:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.963 21:33:48 -- host/auth.sh@44 -- # digest=sha384 00:19:22.963 21:33:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:22.963 21:33:48 -- host/auth.sh@44 -- # keyid=4 00:19:22.963 21:33:48 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:22.963 21:33:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:22.963 21:33:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:22.963 21:33:48 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:22.963 21:33:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:19:22.963 21:33:48 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:19:22.963 21:33:48 -- host/auth.sh@68 -- # digest=sha384 00:19:22.963 21:33:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:22.963 21:33:48 -- host/auth.sh@68 -- # keyid=4 00:19:22.963 21:33:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:22.963 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.963 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.963 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.963 21:33:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.963 21:33:48 -- nvmf/common.sh@717 -- # local ip 00:19:22.963 21:33:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.963 21:33:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.963 21:33:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.963 21:33:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.963 21:33:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:22.963 21:33:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.963 21:33:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:22.963 21:33:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:22.963 21:33:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:22.963 21:33:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:22.963 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.963 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:23.221 nvme0n1 00:19:23.221 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.221 21:33:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.221 21:33:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.221 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.221 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:23.221 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.221 21:33:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.221 21:33:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.221 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.221 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:23.221 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.221 21:33:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.221 21:33:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.221 21:33:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:23.221 21:33:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.221 21:33:48 -- host/auth.sh@44 -- # digest=sha384 00:19:23.221 21:33:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:23.221 21:33:48 -- host/auth.sh@44 -- # keyid=0 00:19:23.221 21:33:48 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:23.221 21:33:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:23.221 21:33:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:23.221 21:33:48 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:23.221 21:33:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:19:23.221 21:33:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.221 21:33:48 -- host/auth.sh@68 -- # 
digest=sha384 00:19:23.221 21:33:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:23.221 21:33:48 -- host/auth.sh@68 -- # keyid=0 00:19:23.221 21:33:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:23.221 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.221 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:23.221 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.221 21:33:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.221 21:33:48 -- nvmf/common.sh@717 -- # local ip 00:19:23.221 21:33:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.221 21:33:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.221 21:33:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.221 21:33:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.221 21:33:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:23.221 21:33:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.221 21:33:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:23.221 21:33:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:23.221 21:33:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:23.221 21:33:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:23.221 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.221 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:23.479 nvme0n1 00:19:23.479 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.479 21:33:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.479 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.479 21:33:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.479 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:23.479 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.479 21:33:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.479 21:33:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.479 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.479 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:23.479 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.479 21:33:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.479 21:33:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:23.479 21:33:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.479 21:33:48 -- host/auth.sh@44 -- # digest=sha384 00:19:23.479 21:33:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:23.479 21:33:48 -- host/auth.sh@44 -- # keyid=1 00:19:23.479 21:33:48 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:23.479 21:33:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:23.479 21:33:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:23.479 21:33:48 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:23.479 21:33:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:19:23.479 21:33:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.479 21:33:48 -- host/auth.sh@68 -- # digest=sha384 00:19:23.479 21:33:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:23.479 21:33:48 -- host/auth.sh@68 
-- # keyid=1 00:19:23.479 21:33:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:23.479 21:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.479 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:19:23.479 21:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.479 21:33:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.479 21:33:49 -- nvmf/common.sh@717 -- # local ip 00:19:23.479 21:33:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.479 21:33:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.479 21:33:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.479 21:33:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.479 21:33:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:23.479 21:33:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.479 21:33:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:23.479 21:33:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:23.479 21:33:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:23.479 21:33:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:23.479 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.479 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:23.737 nvme0n1 00:19:23.737 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.737 21:33:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.737 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.737 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:23.737 21:33:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.737 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.737 21:33:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.737 21:33:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.737 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.737 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:23.737 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.737 21:33:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.737 21:33:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:23.737 21:33:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.737 21:33:49 -- host/auth.sh@44 -- # digest=sha384 00:19:23.737 21:33:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:23.737 21:33:49 -- host/auth.sh@44 -- # keyid=2 00:19:23.737 21:33:49 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:23.737 21:33:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:23.737 21:33:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:23.737 21:33:49 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:23.737 21:33:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:19:23.737 21:33:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.737 21:33:49 -- host/auth.sh@68 -- # digest=sha384 00:19:23.737 21:33:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:23.737 21:33:49 -- host/auth.sh@68 -- # keyid=2 00:19:23.737 21:33:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:23.737 21:33:49 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.737 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:23.737 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.737 21:33:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.737 21:33:49 -- nvmf/common.sh@717 -- # local ip 00:19:23.737 21:33:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.737 21:33:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.737 21:33:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.737 21:33:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.737 21:33:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:23.737 21:33:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.737 21:33:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:23.737 21:33:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:23.737 21:33:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:23.737 21:33:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:23.737 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.737 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:23.994 nvme0n1 00:19:23.994 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.994 21:33:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.994 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.994 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:23.994 21:33:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.994 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.994 21:33:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.994 21:33:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.994 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.994 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:23.994 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.994 21:33:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.994 21:33:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:23.994 21:33:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.994 21:33:49 -- host/auth.sh@44 -- # digest=sha384 00:19:23.994 21:33:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:23.994 21:33:49 -- host/auth.sh@44 -- # keyid=3 00:19:23.994 21:33:49 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:23.994 21:33:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:23.994 21:33:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:23.994 21:33:49 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:23.994 21:33:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:19:23.994 21:33:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.994 21:33:49 -- host/auth.sh@68 -- # digest=sha384 00:19:23.994 21:33:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:23.994 21:33:49 -- host/auth.sh@68 -- # keyid=3 00:19:23.994 21:33:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:23.994 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.994 21:33:49 -- common/autotest_common.sh@10 -- # set +x 
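
The host/auth.sh@107, @108 and @109 frames in the trace are three nested loops sweeping the full cross-product of digests, DH groups and key slots; each inner iteration provisions the key on the target and then runs the connect/verify/detach step. A sketch of the driver using only the values this run demonstrably exercises (the actual script may carry longer lists):

digests=(sha256 sha384)                                       # as observed in this run
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # as observed in this run
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do          # keys[] holds the DHHC-1 secrets
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
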
00:19:23.994 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.994 21:33:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.994 21:33:49 -- nvmf/common.sh@717 -- # local ip 00:19:23.994 21:33:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.994 21:33:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.994 21:33:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.994 21:33:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.994 21:33:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:23.994 21:33:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.994 21:33:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:23.994 21:33:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:23.994 21:33:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:23.994 21:33:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:23.994 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.994 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:24.251 nvme0n1 00:19:24.251 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.251 21:33:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.251 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.251 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:24.251 21:33:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:24.251 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.251 21:33:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.251 21:33:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.251 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.251 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:24.251 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.251 21:33:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:24.251 21:33:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:24.251 21:33:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:24.251 21:33:49 -- host/auth.sh@44 -- # digest=sha384 00:19:24.252 21:33:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:24.252 21:33:49 -- host/auth.sh@44 -- # keyid=4 00:19:24.252 21:33:49 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:24.252 21:33:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:24.252 21:33:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:24.252 21:33:49 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:24.252 21:33:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:19:24.252 21:33:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:24.252 21:33:49 -- host/auth.sh@68 -- # digest=sha384 00:19:24.252 21:33:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:24.252 21:33:49 -- host/auth.sh@68 -- # keyid=4 00:19:24.252 21:33:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.252 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.252 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:24.252 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
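
On the target side, the echo commands traced for nvmet_auth_set_key (the 'hmac(...)' string, the DH group name and the DHHC-1 secret) are consistent with writes into the kernel nvmet configfs host entry. A sketch under that assumption — the configfs path and attribute names below come from the standard Linux nvmet layout, not from this log:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Hypothetical path; the host entry would have been created when the
    # subsystem was configured to allow nqn.2024-02.io.spdk:host0.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host}/dhchap_hash"
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"
    echo "${keys[$keyid]}" > "${host}/dhchap_key"
}
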
00:19:24.252 21:33:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:24.252 21:33:49 -- nvmf/common.sh@717 -- # local ip 00:19:24.252 21:33:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:24.252 21:33:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:24.252 21:33:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.252 21:33:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.252 21:33:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:24.252 21:33:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.252 21:33:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:24.252 21:33:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:24.252 21:33:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:24.252 21:33:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:24.252 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.252 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:24.509 nvme0n1 00:19:24.509 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.509 21:33:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.509 21:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.509 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:24.509 21:33:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:24.509 21:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.509 21:33:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.509 21:33:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.509 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.509 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:24.509 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.509 21:33:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.509 21:33:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:24.509 21:33:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:24.509 21:33:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:24.509 21:33:50 -- host/auth.sh@44 -- # digest=sha384 00:19:24.509 21:33:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:24.509 21:33:50 -- host/auth.sh@44 -- # keyid=0 00:19:24.509 21:33:50 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:24.509 21:33:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:24.509 21:33:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:24.509 21:33:50 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:24.509 21:33:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:19:24.509 21:33:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:24.510 21:33:50 -- host/auth.sh@68 -- # digest=sha384 00:19:24.510 21:33:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:24.510 21:33:50 -- host/auth.sh@68 -- # keyid=0 00:19:24.510 21:33:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:24.510 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.510 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:24.510 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.510 21:33:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:24.510 21:33:50 -- 
nvmf/common.sh@717 -- # local ip 00:19:24.510 21:33:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:24.510 21:33:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:24.510 21:33:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.510 21:33:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.510 21:33:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:24.510 21:33:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.510 21:33:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:24.510 21:33:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:24.510 21:33:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:24.510 21:33:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:24.510 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.510 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:24.767 nvme0n1 00:19:24.767 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.767 21:33:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.767 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.767 21:33:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:24.767 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:24.767 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.767 21:33:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.767 21:33:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.767 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.767 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:24.767 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.767 21:33:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:24.767 21:33:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:24.767 21:33:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:24.767 21:33:50 -- host/auth.sh@44 -- # digest=sha384 00:19:24.767 21:33:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:24.767 21:33:50 -- host/auth.sh@44 -- # keyid=1 00:19:24.767 21:33:50 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:24.767 21:33:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:24.767 21:33:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:24.767 21:33:50 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:24.767 21:33:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:19:24.767 21:33:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:24.767 21:33:50 -- host/auth.sh@68 -- # digest=sha384 00:19:24.767 21:33:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:24.767 21:33:50 -- host/auth.sh@68 -- # keyid=1 00:19:24.767 21:33:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:24.767 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.767 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:24.767 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.767 21:33:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:24.767 21:33:50 -- nvmf/common.sh@717 -- # local ip 00:19:24.767 21:33:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:24.767 21:33:50 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:24.767 21:33:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.767 21:33:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.767 21:33:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:24.767 21:33:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.767 21:33:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:24.767 21:33:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:24.767 21:33:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:24.767 21:33:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:24.767 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.767 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:25.025 nvme0n1 00:19:25.025 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.025 21:33:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.025 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.025 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:25.025 21:33:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:25.282 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.282 21:33:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.282 21:33:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.282 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.282 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:25.282 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.282 21:33:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:25.282 21:33:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:25.282 21:33:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:25.282 21:33:50 -- host/auth.sh@44 -- # digest=sha384 00:19:25.282 21:33:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:25.282 21:33:50 -- host/auth.sh@44 -- # keyid=2 00:19:25.282 21:33:50 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:25.282 21:33:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:25.282 21:33:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:25.282 21:33:50 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:25.282 21:33:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:19:25.282 21:33:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:25.282 21:33:50 -- host/auth.sh@68 -- # digest=sha384 00:19:25.282 21:33:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:25.282 21:33:50 -- host/auth.sh@68 -- # keyid=2 00:19:25.282 21:33:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.282 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.282 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:25.282 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.282 21:33:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:25.282 21:33:50 -- nvmf/common.sh@717 -- # local ip 00:19:25.282 21:33:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.282 21:33:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.282 21:33:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.282 21:33:50 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.282 21:33:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:25.282 21:33:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.282 21:33:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:25.282 21:33:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:25.282 21:33:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:25.282 21:33:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:25.282 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.282 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:25.540 nvme0n1 00:19:25.540 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.540 21:33:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.540 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.540 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.540 21:33:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:25.540 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.540 21:33:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.540 21:33:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.540 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.540 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.540 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.540 21:33:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:25.540 21:33:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:25.540 21:33:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:25.540 21:33:51 -- host/auth.sh@44 -- # digest=sha384 00:19:25.540 21:33:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:25.540 21:33:51 -- host/auth.sh@44 -- # keyid=3 00:19:25.540 21:33:51 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:25.540 21:33:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:25.540 21:33:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:25.540 21:33:51 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:25.540 21:33:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:19:25.540 21:33:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:25.540 21:33:51 -- host/auth.sh@68 -- # digest=sha384 00:19:25.540 21:33:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:25.540 21:33:51 -- host/auth.sh@68 -- # keyid=3 00:19:25.540 21:33:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.540 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.540 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.540 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.540 21:33:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:25.540 21:33:51 -- nvmf/common.sh@717 -- # local ip 00:19:25.540 21:33:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.540 21:33:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.540 21:33:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.540 21:33:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.540 21:33:51 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:19:25.540 21:33:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.540 21:33:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:25.540 21:33:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:25.540 21:33:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:25.540 21:33:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:25.540 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.540 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.798 nvme0n1 00:19:25.798 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.798 21:33:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.798 21:33:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:25.798 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.798 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.798 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.798 21:33:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.798 21:33:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.798 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.798 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.798 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.798 21:33:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:25.798 21:33:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:25.798 21:33:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:25.798 21:33:51 -- host/auth.sh@44 -- # digest=sha384 00:19:25.798 21:33:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:25.798 21:33:51 -- host/auth.sh@44 -- # keyid=4 00:19:25.798 21:33:51 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:25.798 21:33:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:25.798 21:33:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:25.798 21:33:51 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:25.798 21:33:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:19:25.798 21:33:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:25.798 21:33:51 -- host/auth.sh@68 -- # digest=sha384 00:19:25.798 21:33:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:25.798 21:33:51 -- host/auth.sh@68 -- # keyid=4 00:19:25.798 21:33:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.798 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.798 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.798 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.798 21:33:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:25.798 21:33:51 -- nvmf/common.sh@717 -- # local ip 00:19:25.798 21:33:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.798 21:33:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.798 21:33:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.798 21:33:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.798 21:33:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:25.798 21:33:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:19:25.798 21:33:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:25.798 21:33:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:25.798 21:33:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:25.798 21:33:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:25.798 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.798 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:26.364 nvme0n1 00:19:26.364 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.364 21:33:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.364 21:33:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:26.364 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.364 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:26.364 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.364 21:33:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.364 21:33:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.364 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.364 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:26.364 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.364 21:33:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.364 21:33:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:26.364 21:33:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:26.364 21:33:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:26.364 21:33:51 -- host/auth.sh@44 -- # digest=sha384 00:19:26.364 21:33:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:26.364 21:33:51 -- host/auth.sh@44 -- # keyid=0 00:19:26.364 21:33:51 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:26.364 21:33:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:26.364 21:33:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:26.364 21:33:51 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:26.364 21:33:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:19:26.364 21:33:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:26.364 21:33:51 -- host/auth.sh@68 -- # digest=sha384 00:19:26.364 21:33:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:26.364 21:33:51 -- host/auth.sh@68 -- # keyid=0 00:19:26.364 21:33:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:26.364 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.364 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:26.364 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.364 21:33:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:26.364 21:33:51 -- nvmf/common.sh@717 -- # local ip 00:19:26.364 21:33:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:26.364 21:33:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:26.365 21:33:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.365 21:33:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.365 21:33:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:26.365 21:33:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.365 21:33:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:26.365 
21:33:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:26.365 21:33:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:26.365 21:33:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:26.365 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.365 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:19:26.932 nvme0n1 00:19:26.932 21:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.932 21:33:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.932 21:33:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:26.932 21:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.932 21:33:52 -- common/autotest_common.sh@10 -- # set +x 00:19:26.932 21:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.932 21:33:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.932 21:33:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.932 21:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.932 21:33:52 -- common/autotest_common.sh@10 -- # set +x 00:19:26.932 21:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.932 21:33:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:26.932 21:33:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:26.932 21:33:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:26.932 21:33:52 -- host/auth.sh@44 -- # digest=sha384 00:19:26.932 21:33:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:26.932 21:33:52 -- host/auth.sh@44 -- # keyid=1 00:19:26.932 21:33:52 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:26.932 21:33:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:26.932 21:33:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:26.932 21:33:52 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:26.932 21:33:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:19:26.932 21:33:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:26.932 21:33:52 -- host/auth.sh@68 -- # digest=sha384 00:19:26.932 21:33:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:26.932 21:33:52 -- host/auth.sh@68 -- # keyid=1 00:19:26.932 21:33:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:26.932 21:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.932 21:33:52 -- common/autotest_common.sh@10 -- # set +x 00:19:26.932 21:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.932 21:33:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:26.932 21:33:52 -- nvmf/common.sh@717 -- # local ip 00:19:26.932 21:33:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:26.932 21:33:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:26.932 21:33:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.932 21:33:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.932 21:33:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:26.932 21:33:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.932 21:33:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:26.932 21:33:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:26.932 21:33:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
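The nvmf/common.sh@717-731 records just above are one expansion of get_main_ns_ip: it maps the active transport to the name of the shell variable holding the initiator-side address, indirectly expands that variable, and prints the result (10.0.0.1 throughout this run). A minimal bash sketch of that logic, reconstructed from the trace; the transport variable name is an assumption, since the trace only ever shows its expanded value, tcp:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # $TEST_TRANSPORT is assumed; xtrace shows both checks with 'tcp' and
        # 'NVMF_INITIATOR_IP' already substituted in (nvmf/common.sh@723)
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}  # a variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1           # indirect expansion -> 10.0.0.1 (@726)
        echo "${!ip}"                         # (@731)
    }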
00:19:26.932 21:33:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:26.932 21:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.932 21:33:52 -- common/autotest_common.sh@10 -- # set +x 00:19:27.498 nvme0n1 00:19:27.498 21:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.498 21:33:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.498 21:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.498 21:33:52 -- common/autotest_common.sh@10 -- # set +x 00:19:27.498 21:33:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:27.498 21:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.498 21:33:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.498 21:33:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.498 21:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.498 21:33:52 -- common/autotest_common.sh@10 -- # set +x 00:19:27.498 21:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.498 21:33:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:27.498 21:33:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:27.498 21:33:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:27.498 21:33:52 -- host/auth.sh@44 -- # digest=sha384 00:19:27.498 21:33:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:27.498 21:33:52 -- host/auth.sh@44 -- # keyid=2 00:19:27.498 21:33:52 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:27.498 21:33:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:27.498 21:33:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:27.498 21:33:52 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:27.498 21:33:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:19:27.498 21:33:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:27.498 21:33:52 -- host/auth.sh@68 -- # digest=sha384 00:19:27.498 21:33:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:27.498 21:33:52 -- host/auth.sh@68 -- # keyid=2 00:19:27.498 21:33:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:27.498 21:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.498 21:33:52 -- common/autotest_common.sh@10 -- # set +x 00:19:27.498 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.498 21:33:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:27.498 21:33:53 -- nvmf/common.sh@717 -- # local ip 00:19:27.498 21:33:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:27.498 21:33:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:27.498 21:33:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.498 21:33:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.498 21:33:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:27.499 21:33:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.499 21:33:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:27.499 21:33:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:27.499 21:33:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:27.499 21:33:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:27.499 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.499 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:19:28.064 nvme0n1 00:19:28.064 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.064 21:33:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.064 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.064 21:33:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:28.064 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:19:28.064 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.064 21:33:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.064 21:33:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.064 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.064 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:19:28.064 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.064 21:33:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:28.064 21:33:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:28.064 21:33:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:28.064 21:33:53 -- host/auth.sh@44 -- # digest=sha384 00:19:28.064 21:33:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.064 21:33:53 -- host/auth.sh@44 -- # keyid=3 00:19:28.064 21:33:53 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:28.064 21:33:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:28.064 21:33:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:28.064 21:33:53 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:28.064 21:33:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:19:28.064 21:33:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:28.064 21:33:53 -- host/auth.sh@68 -- # digest=sha384 00:19:28.064 21:33:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:28.064 21:33:53 -- host/auth.sh@68 -- # keyid=3 00:19:28.064 21:33:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.064 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.064 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:19:28.064 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.064 21:33:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:28.064 21:33:53 -- nvmf/common.sh@717 -- # local ip 00:19:28.064 21:33:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:28.064 21:33:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:28.064 21:33:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.064 21:33:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.064 21:33:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:28.064 21:33:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.064 21:33:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:28.064 21:33:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:28.064 21:33:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:28.064 21:33:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:28.064 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 
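The records on either side of this point are one instance of the wrapper idiom that brackets every rpc_cmd in this trace: xtrace_disable is invoked at common/autotest_common.sh:549, its body runs set +x at line 10 to mute tracing while the RPC runs and its output is captured, and line 577 asserts the call's exit status, which xtrace prints with the value already substituted, hence the recurring '[[ 0 == 0 ]]'. A standalone sketch of that idiom (not the actual autotest_common.sh body, which this trace does not show):

    run_rpc_checked() {
        set +x                          # like xtrace_disable/@10: quiet the trace
        local out
        out=$("$@")                     # run the RPC, capture its stdout
        local status=$?
        set -x
        [[ $status == 0 ]] || return 1  # traced as '[[ 0 == 0 ]]' on success (@577)
        echo "$out"                     # re-emitted output: the bare 'nvme0n1' lines
    }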
00:19:28.064 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:19:28.640 nvme0n1 00:19:28.640 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.640 21:33:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.640 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.640 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:19:28.640 21:33:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:28.640 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.640 21:33:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.640 21:33:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.640 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.640 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:19:28.640 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.640 21:33:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:28.640 21:33:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:28.640 21:33:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:28.640 21:33:54 -- host/auth.sh@44 -- # digest=sha384 00:19:28.640 21:33:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.640 21:33:54 -- host/auth.sh@44 -- # keyid=4 00:19:28.640 21:33:54 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:28.640 21:33:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:28.640 21:33:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:28.640 21:33:54 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:28.640 21:33:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:19:28.640 21:33:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:28.640 21:33:54 -- host/auth.sh@68 -- # digest=sha384 00:19:28.640 21:33:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:28.640 21:33:54 -- host/auth.sh@68 -- # keyid=4 00:19:28.640 21:33:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.640 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.640 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:19:28.640 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.640 21:33:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:28.640 21:33:54 -- nvmf/common.sh@717 -- # local ip 00:19:28.640 21:33:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:28.640 21:33:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:28.640 21:33:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.640 21:33:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.640 21:33:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:28.640 21:33:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.640 21:33:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:28.640 21:33:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:28.640 21:33:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:28.640 21:33:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:28.640 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.640 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:19:29.205 
nvme0n1 00:19:29.205 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.205 21:33:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:29.205 21:33:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.205 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.205 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:19:29.205 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.205 21:33:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.205 21:33:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.205 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.205 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:19:29.205 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.205 21:33:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.205 21:33:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:29.205 21:33:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:29.205 21:33:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:29.205 21:33:54 -- host/auth.sh@44 -- # digest=sha384 00:19:29.205 21:33:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:29.205 21:33:54 -- host/auth.sh@44 -- # keyid=0 00:19:29.205 21:33:54 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:29.205 21:33:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:29.205 21:33:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:29.205 21:33:54 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:29.205 21:33:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:19:29.205 21:33:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:29.205 21:33:54 -- host/auth.sh@68 -- # digest=sha384 00:19:29.205 21:33:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:29.205 21:33:54 -- host/auth.sh@68 -- # keyid=0 00:19:29.205 21:33:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:29.205 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.205 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:19:29.205 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.205 21:33:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:29.205 21:33:54 -- nvmf/common.sh@717 -- # local ip 00:19:29.205 21:33:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:29.205 21:33:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:29.205 21:33:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.205 21:33:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.205 21:33:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:29.205 21:33:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.205 21:33:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:29.205 21:33:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:29.205 21:33:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:29.205 21:33:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:29.205 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.205 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:19:30.140 nvme0n1 00:19:30.140 21:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
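Around this point each keyid pass runs the same four host-side RPCs before moving on. Spelled out once, with the values exactly as they appear in the sha384 + ffdhe8192, key0 pass above, the cycle driven by connect_authenticate is:

    # one connect/verify/teardown cycle (values from the trace)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The '[[ nvme0 == \n\v\m\e\0 ]]' records are xtrace's escaped rendering of that string comparison, since the right-hand side of == inside [[ ]] is a pattern.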
00:19:30.140 21:33:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.140 21:33:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:30.140 21:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.140 21:33:55 -- common/autotest_common.sh@10 -- # set +x 00:19:30.140 21:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.140 21:33:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.140 21:33:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.140 21:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.140 21:33:55 -- common/autotest_common.sh@10 -- # set +x 00:19:30.140 21:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.140 21:33:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:30.140 21:33:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:30.140 21:33:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:30.140 21:33:55 -- host/auth.sh@44 -- # digest=sha384 00:19:30.140 21:33:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:30.140 21:33:55 -- host/auth.sh@44 -- # keyid=1 00:19:30.140 21:33:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:30.140 21:33:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:30.140 21:33:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:30.140 21:33:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:30.140 21:33:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:19:30.140 21:33:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:30.140 21:33:55 -- host/auth.sh@68 -- # digest=sha384 00:19:30.140 21:33:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:30.140 21:33:55 -- host/auth.sh@68 -- # keyid=1 00:19:30.140 21:33:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:30.140 21:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.140 21:33:55 -- common/autotest_common.sh@10 -- # set +x 00:19:30.140 21:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.140 21:33:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:30.140 21:33:55 -- nvmf/common.sh@717 -- # local ip 00:19:30.140 21:33:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:30.140 21:33:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:30.140 21:33:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.140 21:33:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.140 21:33:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:30.140 21:33:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.140 21:33:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:30.140 21:33:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:30.140 21:33:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:30.140 21:33:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:30.140 21:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.140 21:33:55 -- common/autotest_common.sh@10 -- # set +x 00:19:31.512 nvme0n1 00:19:31.512 21:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.512 21:33:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.512 21:33:56 -- host/auth.sh@73 
-- # jq -r '.[].name' 00:19:31.512 21:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.512 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:19:31.512 21:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.512 21:33:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.513 21:33:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.513 21:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.513 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:19:31.513 21:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.513 21:33:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:31.513 21:33:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:31.513 21:33:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:31.513 21:33:56 -- host/auth.sh@44 -- # digest=sha384 00:19:31.513 21:33:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:31.513 21:33:56 -- host/auth.sh@44 -- # keyid=2 00:19:31.513 21:33:56 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:31.513 21:33:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:31.513 21:33:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:31.513 21:33:56 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:31.513 21:33:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:19:31.513 21:33:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:31.513 21:33:56 -- host/auth.sh@68 -- # digest=sha384 00:19:31.513 21:33:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:31.513 21:33:56 -- host/auth.sh@68 -- # keyid=2 00:19:31.513 21:33:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:31.513 21:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.513 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:19:31.513 21:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.513 21:33:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:31.513 21:33:56 -- nvmf/common.sh@717 -- # local ip 00:19:31.513 21:33:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:31.513 21:33:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:31.513 21:33:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.513 21:33:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.513 21:33:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:31.513 21:33:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.513 21:33:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:31.513 21:33:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:31.513 21:33:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:31.513 21:33:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:31.513 21:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.513 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 nvme0n1 00:19:32.446 21:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.446 21:33:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.446 21:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.446 21:33:57 -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 21:33:57 -- host/auth.sh@73 -- # jq -r '.[].name' 
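The three bare echoes at host/auth.sh@47-49 in each pass ('hmac(<digest>)', the dhgroup name, the DHHC-1 secret) are nvmet_auth_set_key handing those values to the kernel nvmet target, presumably via the target's per-host dhchap configfs attributes (an assumption: the trace records only the echoes, not their destination). The secrets themselves follow the NVMe DH-HMAC-CHAP representation 'DHHC-1:<t>:<base64>:', where the second field tags the hash the secret was transformed with: 00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512 (spec background, not stated in the trace; it is why key2 through key4 in this log carry 01, 02 and 03). Pulling a secret apart in bash:

    key='DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl:'  # key2 above
    IFS=: read -r fmt transform secret _ <<< "$key"
    echo "$fmt"        # DHHC-1
    echo "$transform"  # 01 -> this secret was pre-hashed with SHA-256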
00:19:32.446 21:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.446 21:33:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.446 21:33:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.446 21:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.446 21:33:57 -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 21:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.446 21:33:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:32.446 21:33:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:32.446 21:33:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:32.446 21:33:57 -- host/auth.sh@44 -- # digest=sha384 00:19:32.446 21:33:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:32.446 21:33:57 -- host/auth.sh@44 -- # keyid=3 00:19:32.446 21:33:57 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:32.446 21:33:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:32.446 21:33:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:32.446 21:33:57 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:32.446 21:33:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:19:32.446 21:33:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:32.446 21:33:57 -- host/auth.sh@68 -- # digest=sha384 00:19:32.446 21:33:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:32.446 21:33:57 -- host/auth.sh@68 -- # keyid=3 00:19:32.446 21:33:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:32.446 21:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.446 21:33:57 -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 21:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.446 21:33:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:32.446 21:33:57 -- nvmf/common.sh@717 -- # local ip 00:19:32.446 21:33:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:32.446 21:33:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:32.446 21:33:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.446 21:33:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.446 21:33:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:32.446 21:33:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.446 21:33:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:32.446 21:33:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:32.446 21:33:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:32.446 21:33:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:32.446 21:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.446 21:33:57 -- common/autotest_common.sh@10 -- # set +x 00:19:33.379 nvme0n1 00:19:33.379 21:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.379 21:33:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.379 21:33:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:33.379 21:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.379 21:33:58 -- common/autotest_common.sh@10 -- # set +x 00:19:33.379 21:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.379 21:33:58 -- host/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:19:33.379 21:33:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.379 21:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.379 21:33:58 -- common/autotest_common.sh@10 -- # set +x 00:19:33.379 21:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.379 21:33:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:33.379 21:33:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:33.379 21:33:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:33.379 21:33:58 -- host/auth.sh@44 -- # digest=sha384 00:19:33.379 21:33:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:33.379 21:33:58 -- host/auth.sh@44 -- # keyid=4 00:19:33.379 21:33:58 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:33.379 21:33:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:33.379 21:33:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:33.379 21:33:58 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:33.379 21:33:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:19:33.379 21:33:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:33.379 21:33:58 -- host/auth.sh@68 -- # digest=sha384 00:19:33.379 21:33:58 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:33.379 21:33:58 -- host/auth.sh@68 -- # keyid=4 00:19:33.379 21:33:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:33.379 21:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.379 21:33:58 -- common/autotest_common.sh@10 -- # set +x 00:19:33.379 21:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.379 21:33:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:33.379 21:33:58 -- nvmf/common.sh@717 -- # local ip 00:19:33.379 21:33:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:33.379 21:33:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:33.379 21:33:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.379 21:33:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.379 21:33:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:33.379 21:33:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.379 21:33:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:33.379 21:33:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:33.380 21:33:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:33.380 21:33:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:33.380 21:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.380 21:33:58 -- common/autotest_common.sh@10 -- # set +x 00:19:34.313 nvme0n1 00:19:34.313 21:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.313 21:33:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.313 21:33:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:34.313 21:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.313 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:19:34.313 21:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.313 21:33:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.313 21:33:59 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:34.313 21:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.313 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:19:34.313 21:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.314 21:33:59 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:34.314 21:33:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.314 21:33:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:34.314 21:33:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:34.314 21:33:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:34.314 21:33:59 -- host/auth.sh@44 -- # digest=sha512 00:19:34.314 21:33:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.314 21:33:59 -- host/auth.sh@44 -- # keyid=0 00:19:34.314 21:33:59 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:34.314 21:33:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:34.314 21:33:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:34.314 21:33:59 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:34.314 21:33:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:19:34.314 21:33:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:34.314 21:33:59 -- host/auth.sh@68 -- # digest=sha512 00:19:34.314 21:33:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:34.314 21:33:59 -- host/auth.sh@68 -- # keyid=0 00:19:34.314 21:33:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.314 21:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.314 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:19:34.314 21:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.314 21:33:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:34.314 21:33:59 -- nvmf/common.sh@717 -- # local ip 00:19:34.314 21:33:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:34.314 21:33:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:34.314 21:33:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.314 21:33:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.314 21:33:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:34.314 21:33:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.314 21:33:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:34.314 21:33:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:34.314 21:33:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:34.314 21:33:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:34.314 21:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.314 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:19:34.572 nvme0n1 00:19:34.572 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.572 21:34:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.572 21:34:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:34.572 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.572 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:34.572 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.572 21:34:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.572 21:34:00 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:34.572 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.572 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:34.572 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.572 21:34:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:34.572 21:34:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:34.572 21:34:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:34.572 21:34:00 -- host/auth.sh@44 -- # digest=sha512 00:19:34.572 21:34:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.572 21:34:00 -- host/auth.sh@44 -- # keyid=1 00:19:34.572 21:34:00 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:34.572 21:34:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:34.572 21:34:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:34.572 21:34:00 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:34.572 21:34:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:19:34.572 21:34:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:34.572 21:34:00 -- host/auth.sh@68 -- # digest=sha512 00:19:34.572 21:34:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:34.572 21:34:00 -- host/auth.sh@68 -- # keyid=1 00:19:34.572 21:34:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.572 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.573 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:34.573 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.573 21:34:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:34.573 21:34:00 -- nvmf/common.sh@717 -- # local ip 00:19:34.573 21:34:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:34.573 21:34:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:34.573 21:34:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.573 21:34:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.573 21:34:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:34.573 21:34:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.573 21:34:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:34.573 21:34:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:34.573 21:34:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:34.573 21:34:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:34.573 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.573 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:34.573 nvme0n1 00:19:34.573 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.830 21:34:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.830 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.830 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:34.830 21:34:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:34.830 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.830 21:34:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.830 21:34:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.830 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 
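The digest loop advanced from sha384 to sha512 just above (host/auth.sh@107), which pins down the overall shape of this test: three nested loops over digest, DH group and key index, each pass re-keying the target and re-running the connect cycle. A skeleton reconstructed directly from the for-statements echoed at host/auth.sh@107-111, with the array contents limited to the values visible in this window (the script's full lists may be longer):

    digests=(sha384 sha512)                           # as observed here
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    # keys[0..4]: the five DHHC-1 secrets echoed throughout the trace
    for digest in "${digests[@]}"; do                 # host/auth.sh@107
        for dhgroup in "${dhgroups[@]}"; do           # host/auth.sh@108
            for keyid in "${!keys[@]}"; do            # host/auth.sh@109
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @110
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @111
            done
        done
    done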
00:19:34.830 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:34.830 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.830 21:34:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:34.830 21:34:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:34.830 21:34:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:34.830 21:34:00 -- host/auth.sh@44 -- # digest=sha512 00:19:34.830 21:34:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.830 21:34:00 -- host/auth.sh@44 -- # keyid=2 00:19:34.830 21:34:00 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:34.830 21:34:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:34.830 21:34:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:34.830 21:34:00 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:34.830 21:34:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:19:34.830 21:34:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:34.830 21:34:00 -- host/auth.sh@68 -- # digest=sha512 00:19:34.830 21:34:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:34.830 21:34:00 -- host/auth.sh@68 -- # keyid=2 00:19:34.830 21:34:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.830 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.830 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:34.831 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.831 21:34:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:34.831 21:34:00 -- nvmf/common.sh@717 -- # local ip 00:19:34.831 21:34:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:34.831 21:34:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:34.831 21:34:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.831 21:34:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.831 21:34:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:34.831 21:34:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.831 21:34:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:34.831 21:34:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:34.831 21:34:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:34.831 21:34:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:34.831 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.831 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:34.831 nvme0n1 00:19:34.831 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.831 21:34:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.831 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.831 21:34:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:34.831 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:34.831 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.089 21:34:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.089 21:34:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.089 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.089 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:35.089 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.089 21:34:00 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:35.089 21:34:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:35.089 21:34:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:35.089 21:34:00 -- host/auth.sh@44 -- # digest=sha512 00:19:35.089 21:34:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:35.089 21:34:00 -- host/auth.sh@44 -- # keyid=3 00:19:35.089 21:34:00 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:35.089 21:34:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:35.089 21:34:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:35.089 21:34:00 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:35.089 21:34:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:19:35.089 21:34:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:35.089 21:34:00 -- host/auth.sh@68 -- # digest=sha512 00:19:35.089 21:34:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:35.089 21:34:00 -- host/auth.sh@68 -- # keyid=3 00:19:35.089 21:34:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.089 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.089 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:35.089 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.089 21:34:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:35.089 21:34:00 -- nvmf/common.sh@717 -- # local ip 00:19:35.089 21:34:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:35.089 21:34:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:35.089 21:34:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.089 21:34:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.089 21:34:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:35.089 21:34:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.089 21:34:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:35.089 21:34:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:35.089 21:34:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:35.090 21:34:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:35.090 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.090 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:35.090 nvme0n1 00:19:35.090 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.090 21:34:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.090 21:34:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:35.090 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.090 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:35.090 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.090 21:34:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.090 21:34:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.090 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.090 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:35.090 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.090 21:34:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:35.090 21:34:00 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe2048 4 00:19:35.090 21:34:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:35.090 21:34:00 -- host/auth.sh@44 -- # digest=sha512 00:19:35.090 21:34:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:35.090 21:34:00 -- host/auth.sh@44 -- # keyid=4 00:19:35.090 21:34:00 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:35.090 21:34:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:35.090 21:34:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:35.090 21:34:00 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:35.090 21:34:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:19:35.090 21:34:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:35.090 21:34:00 -- host/auth.sh@68 -- # digest=sha512 00:19:35.090 21:34:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:35.090 21:34:00 -- host/auth.sh@68 -- # keyid=4 00:19:35.090 21:34:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.090 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.090 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:35.090 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.090 21:34:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:35.090 21:34:00 -- nvmf/common.sh@717 -- # local ip 00:19:35.090 21:34:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:35.090 21:34:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:35.090 21:34:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.090 21:34:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.090 21:34:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:35.090 21:34:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.090 21:34:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:35.090 21:34:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:35.090 21:34:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:35.090 21:34:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:35.090 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.090 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:35.348 nvme0n1 00:19:35.348 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.348 21:34:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.348 21:34:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:35.348 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.348 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:35.348 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.348 21:34:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.348 21:34:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.348 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.348 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:35.348 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.348 21:34:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.348 21:34:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:35.348 21:34:00 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe3072 0 00:19:35.348 21:34:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:35.348 21:34:00 -- host/auth.sh@44 -- # digest=sha512 00:19:35.348 21:34:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:35.348 21:34:00 -- host/auth.sh@44 -- # keyid=0 00:19:35.348 21:34:00 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:35.348 21:34:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:35.348 21:34:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:35.348 21:34:00 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:35.348 21:34:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:19:35.348 21:34:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:35.348 21:34:00 -- host/auth.sh@68 -- # digest=sha512 00:19:35.348 21:34:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:35.348 21:34:00 -- host/auth.sh@68 -- # keyid=0 00:19:35.348 21:34:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:35.348 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.348 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:35.348 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.348 21:34:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:35.348 21:34:00 -- nvmf/common.sh@717 -- # local ip 00:19:35.348 21:34:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:35.348 21:34:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:35.348 21:34:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.348 21:34:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.348 21:34:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:35.348 21:34:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.348 21:34:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:35.348 21:34:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:35.348 21:34:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:35.348 21:34:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:35.348 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.348 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:19:35.606 nvme0n1 00:19:35.606 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.606 21:34:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.606 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.606 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:35.606 21:34:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:35.606 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.606 21:34:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.606 21:34:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.606 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.606 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:35.606 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.606 21:34:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:35.606 21:34:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:35.606 21:34:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:35.606 21:34:01 -- host/auth.sh@44 -- # digest=sha512 00:19:35.606 
21:34:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:35.606 21:34:01 -- host/auth.sh@44 -- # keyid=1 00:19:35.606 21:34:01 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:35.606 21:34:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:35.606 21:34:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:35.606 21:34:01 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:35.606 21:34:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:19:35.606 21:34:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:35.606 21:34:01 -- host/auth.sh@68 -- # digest=sha512 00:19:35.606 21:34:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:35.606 21:34:01 -- host/auth.sh@68 -- # keyid=1 00:19:35.606 21:34:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:35.606 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.606 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:35.606 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.606 21:34:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:35.606 21:34:01 -- nvmf/common.sh@717 -- # local ip 00:19:35.606 21:34:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:35.606 21:34:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:35.606 21:34:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.606 21:34:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.606 21:34:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:35.606 21:34:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.606 21:34:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:35.606 21:34:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:35.606 21:34:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:35.606 21:34:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:35.606 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.606 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:35.865 nvme0n1 00:19:35.865 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.865 21:34:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.865 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.865 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:35.865 21:34:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:35.865 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.865 21:34:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.865 21:34:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.865 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.865 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:35.865 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.865 21:34:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:35.865 21:34:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:35.865 21:34:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:35.865 21:34:01 -- host/auth.sh@44 -- # digest=sha512 00:19:35.865 21:34:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:35.865 21:34:01 -- host/auth.sh@44 -- # keyid=2 00:19:35.865 
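# For reference: the @108-@111 markers above correspond to the sweep below, which
# walks every DH group against every key index with the digest fixed at sha512 in
# this part of the run. Only the control flow is taken from the trace; the array
# contents shown are assumptions (the dhgroups listed are the ones this log visits,
# and keys[] is populated earlier in auth.sh, outside this excerpt).
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@108
	for keyid in "${!keys[@]}"; do       # host/auth.sh@109
		nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # host/auth.sh@110: provision the target
		connect_authenticate sha512 "$dhgroup" "$keyid"  # host/auth.sh@111: attach, verify, detach
	done
done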
21:34:01 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:35.865 21:34:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:35.865 21:34:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:35.865 21:34:01 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:35.865 21:34:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:19:35.865 21:34:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:35.865 21:34:01 -- host/auth.sh@68 -- # digest=sha512 00:19:35.865 21:34:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:35.865 21:34:01 -- host/auth.sh@68 -- # keyid=2 00:19:35.865 21:34:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:35.865 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.865 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:35.865 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.865 21:34:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:35.865 21:34:01 -- nvmf/common.sh@717 -- # local ip 00:19:35.865 21:34:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:35.865 21:34:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:35.865 21:34:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.865 21:34:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.865 21:34:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:35.865 21:34:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.865 21:34:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:35.865 21:34:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:35.865 21:34:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:35.865 21:34:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:35.865 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.865 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:36.123 nvme0n1 00:19:36.123 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.123 21:34:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.123 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.123 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:36.123 21:34:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:36.123 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.123 21:34:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.123 21:34:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.123 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.123 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:36.123 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.123 21:34:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:36.123 21:34:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:36.123 21:34:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:36.123 21:34:01 -- host/auth.sh@44 -- # digest=sha512 00:19:36.123 21:34:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:36.123 21:34:01 -- host/auth.sh@44 -- # keyid=3 00:19:36.123 21:34:01 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:36.123 21:34:01 -- host/auth.sh@47 -- # 
echo 'hmac(sha512)' 00:19:36.123 21:34:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:36.123 21:34:01 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:36.123 21:34:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:19:36.123 21:34:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:36.123 21:34:01 -- host/auth.sh@68 -- # digest=sha512 00:19:36.123 21:34:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:36.123 21:34:01 -- host/auth.sh@68 -- # keyid=3 00:19:36.123 21:34:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.123 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.123 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:36.123 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.123 21:34:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:36.123 21:34:01 -- nvmf/common.sh@717 -- # local ip 00:19:36.123 21:34:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:36.123 21:34:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:36.123 21:34:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.123 21:34:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.123 21:34:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:36.123 21:34:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.123 21:34:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:36.123 21:34:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:36.123 21:34:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:36.123 21:34:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:36.123 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.123 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:36.381 nvme0n1 00:19:36.381 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.381 21:34:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.381 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.381 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:36.381 21:34:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:36.381 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.381 21:34:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.381 21:34:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.381 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.381 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:19:36.381 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.381 21:34:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:36.381 21:34:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:36.381 21:34:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:36.381 21:34:02 -- host/auth.sh@44 -- # digest=sha512 00:19:36.381 21:34:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:36.381 21:34:02 -- host/auth.sh@44 -- # keyid=4 00:19:36.381 21:34:02 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:36.381 21:34:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:36.381 21:34:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:36.381 
21:34:02 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:36.381 21:34:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:19:36.381 21:34:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:36.381 21:34:02 -- host/auth.sh@68 -- # digest=sha512 00:19:36.381 21:34:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:36.381 21:34:02 -- host/auth.sh@68 -- # keyid=4 00:19:36.381 21:34:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.381 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.381 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:36.381 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.381 21:34:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:36.381 21:34:02 -- nvmf/common.sh@717 -- # local ip 00:19:36.381 21:34:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:36.381 21:34:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:36.381 21:34:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.381 21:34:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.381 21:34:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:36.381 21:34:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.381 21:34:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:36.381 21:34:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:36.381 21:34:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:36.381 21:34:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:36.381 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.381 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:36.639 nvme0n1 00:19:36.639 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.639 21:34:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.639 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.639 21:34:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:36.639 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:36.639 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.639 21:34:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.639 21:34:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.639 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.639 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:36.639 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.639 21:34:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.639 21:34:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:36.639 21:34:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:36.639 21:34:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:36.639 21:34:02 -- host/auth.sh@44 -- # digest=sha512 00:19:36.639 21:34:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:36.639 21:34:02 -- host/auth.sh@44 -- # keyid=0 00:19:36.639 21:34:02 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:36.639 21:34:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:36.639 21:34:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:36.639 21:34:02 -- host/auth.sh@49 -- # echo 
DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:36.639 21:34:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:19:36.639 21:34:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:36.639 21:34:02 -- host/auth.sh@68 -- # digest=sha512 00:19:36.639 21:34:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:36.639 21:34:02 -- host/auth.sh@68 -- # keyid=0 00:19:36.639 21:34:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.639 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.639 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:36.639 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.639 21:34:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:36.639 21:34:02 -- nvmf/common.sh@717 -- # local ip 00:19:36.639 21:34:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:36.639 21:34:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:36.639 21:34:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.639 21:34:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.639 21:34:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:36.639 21:34:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.639 21:34:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:36.639 21:34:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:36.639 21:34:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:36.639 21:34:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:36.639 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.639 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:36.897 nvme0n1 00:19:36.897 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.897 21:34:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.897 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.897 21:34:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:36.897 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:36.897 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.156 21:34:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.156 21:34:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.156 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.156 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:37.156 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.156 21:34:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.156 21:34:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:37.156 21:34:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.156 21:34:02 -- host/auth.sh@44 -- # digest=sha512 00:19:37.156 21:34:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:37.156 21:34:02 -- host/auth.sh@44 -- # keyid=1 00:19:37.156 21:34:02 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:37.156 21:34:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:37.156 21:34:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:37.156 21:34:02 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:37.156 21:34:02 -- host/auth.sh@111 -- # 
connect_authenticate sha512 ffdhe4096 1 00:19:37.156 21:34:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.156 21:34:02 -- host/auth.sh@68 -- # digest=sha512 00:19:37.156 21:34:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:37.156 21:34:02 -- host/auth.sh@68 -- # keyid=1 00:19:37.156 21:34:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.156 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.156 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:37.156 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.156 21:34:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.156 21:34:02 -- nvmf/common.sh@717 -- # local ip 00:19:37.156 21:34:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.156 21:34:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.156 21:34:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.156 21:34:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.156 21:34:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:37.156 21:34:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.156 21:34:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:37.156 21:34:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:37.156 21:34:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:37.156 21:34:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:37.156 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.156 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:37.414 nvme0n1 00:19:37.414 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.414 21:34:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.414 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.414 21:34:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:37.414 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:37.414 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.414 21:34:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.414 21:34:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.414 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.414 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:37.414 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.414 21:34:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.414 21:34:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:37.414 21:34:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.414 21:34:02 -- host/auth.sh@44 -- # digest=sha512 00:19:37.414 21:34:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:37.414 21:34:02 -- host/auth.sh@44 -- # keyid=2 00:19:37.414 21:34:02 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:37.414 21:34:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:37.414 21:34:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:37.414 21:34:02 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:37.414 21:34:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:19:37.414 21:34:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.414 21:34:02 -- host/auth.sh@68 -- # 
digest=sha512 00:19:37.414 21:34:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:37.414 21:34:02 -- host/auth.sh@68 -- # keyid=2 00:19:37.414 21:34:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.414 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.414 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:37.414 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.414 21:34:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.414 21:34:02 -- nvmf/common.sh@717 -- # local ip 00:19:37.414 21:34:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.414 21:34:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.414 21:34:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.414 21:34:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.414 21:34:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:37.414 21:34:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.414 21:34:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:37.414 21:34:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:37.414 21:34:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:37.414 21:34:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:37.414 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.414 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:19:37.672 nvme0n1 00:19:37.672 21:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.672 21:34:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.672 21:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.672 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:19:37.672 21:34:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:37.672 21:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.672 21:34:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.672 21:34:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.672 21:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.672 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:19:37.672 21:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.672 21:34:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:37.672 21:34:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:37.672 21:34:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:37.672 21:34:03 -- host/auth.sh@44 -- # digest=sha512 00:19:37.672 21:34:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:37.672 21:34:03 -- host/auth.sh@44 -- # keyid=3 00:19:37.672 21:34:03 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:37.672 21:34:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:37.672 21:34:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:37.672 21:34:03 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:37.672 21:34:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:19:37.672 21:34:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:37.672 21:34:03 -- host/auth.sh@68 -- # digest=sha512 00:19:37.672 21:34:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:37.672 21:34:03 -- host/auth.sh@68 
-- # keyid=3 00:19:37.672 21:34:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.672 21:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.672 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:19:37.672 21:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.672 21:34:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:37.672 21:34:03 -- nvmf/common.sh@717 -- # local ip 00:19:37.672 21:34:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:37.672 21:34:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:37.672 21:34:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.672 21:34:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.672 21:34:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:37.672 21:34:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.672 21:34:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:37.672 21:34:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:37.672 21:34:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:37.672 21:34:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:37.672 21:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.672 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:19:38.238 nvme0n1 00:19:38.238 21:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.238 21:34:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.238 21:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.238 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:19:38.238 21:34:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:38.238 21:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.238 21:34:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.238 21:34:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.238 21:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.238 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:19:38.238 21:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.238 21:34:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:38.238 21:34:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:38.238 21:34:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:38.238 21:34:03 -- host/auth.sh@44 -- # digest=sha512 00:19:38.238 21:34:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:38.238 21:34:03 -- host/auth.sh@44 -- # keyid=4 00:19:38.238 21:34:03 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:38.238 21:34:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:38.238 21:34:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:38.238 21:34:03 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:38.238 21:34:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:19:38.238 21:34:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:38.238 21:34:03 -- host/auth.sh@68 -- # digest=sha512 00:19:38.238 21:34:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:38.238 21:34:03 -- host/auth.sh@68 -- # keyid=4 00:19:38.238 21:34:03 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:38.238 21:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.238 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:19:38.238 21:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.238 21:34:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:38.238 21:34:03 -- nvmf/common.sh@717 -- # local ip 00:19:38.238 21:34:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:38.238 21:34:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:38.238 21:34:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.238 21:34:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.238 21:34:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:38.238 21:34:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.238 21:34:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:38.238 21:34:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:38.238 21:34:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:38.238 21:34:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:38.238 21:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.238 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:19:38.496 nvme0n1 00:19:38.496 21:34:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.496 21:34:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.496 21:34:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.496 21:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:38.496 21:34:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:38.496 21:34:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.496 21:34:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.496 21:34:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.496 21:34:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.496 21:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:38.496 21:34:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.496 21:34:04 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.496 21:34:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:38.496 21:34:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:38.496 21:34:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:38.496 21:34:04 -- host/auth.sh@44 -- # digest=sha512 00:19:38.496 21:34:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:38.496 21:34:04 -- host/auth.sh@44 -- # keyid=0 00:19:38.496 21:34:04 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:38.496 21:34:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:38.496 21:34:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:38.496 21:34:04 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:38.496 21:34:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:19:38.496 21:34:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:38.496 21:34:04 -- host/auth.sh@68 -- # digest=sha512 00:19:38.496 21:34:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:38.496 21:34:04 -- host/auth.sh@68 -- # keyid=0 00:19:38.496 21:34:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.496 
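# The connect_authenticate calls traced above (@66-@74) reduce to the RPC sequence
# below, copied from the trace itself; rpc_cmd is the test suite's wrapper around
# scripts/rpc.py, and the keyN names passed to --dhchap-key are assumed to have been
# registered with the SPDK app earlier in auth.sh, outside this excerpt. The bare
# "nvme0n1" lines in the log are the bdev names the attach RPC prints on success.
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# @69: restrict the host to exactly the digest/dhgroup pair under test
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	# @70: authenticated attach to the target address picked by get_main_ns_ip
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key$keyid"
	# @73: authentication succeeded only if the controller actually materialized
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
	# @74: detach so the next (dhgroup, keyid) combination starts clean
	rpc_cmd bdev_nvme_detach_controller nvme0
}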
21:34:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.496 21:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:38.496 21:34:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.496 21:34:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:38.496 21:34:04 -- nvmf/common.sh@717 -- # local ip 00:19:38.496 21:34:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:38.496 21:34:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:38.496 21:34:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.496 21:34:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.496 21:34:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:38.496 21:34:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.496 21:34:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:38.496 21:34:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:38.496 21:34:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:38.496 21:34:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:38.496 21:34:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.496 21:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:39.060 nvme0n1 00:19:39.060 21:34:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.060 21:34:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.060 21:34:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.060 21:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:39.060 21:34:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:39.060 21:34:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.060 21:34:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.060 21:34:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.060 21:34:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.060 21:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:39.060 21:34:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.060 21:34:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:39.060 21:34:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:39.060 21:34:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:39.060 21:34:04 -- host/auth.sh@44 -- # digest=sha512 00:19:39.060 21:34:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:39.060 21:34:04 -- host/auth.sh@44 -- # keyid=1 00:19:39.060 21:34:04 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:39.060 21:34:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:39.060 21:34:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:39.060 21:34:04 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:39.060 21:34:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:19:39.060 21:34:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:39.060 21:34:04 -- host/auth.sh@68 -- # digest=sha512 00:19:39.060 21:34:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:39.060 21:34:04 -- host/auth.sh@68 -- # keyid=1 00:19:39.060 21:34:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.060 21:34:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.060 21:34:04 -- common/autotest_common.sh@10 -- # 
set +x 00:19:39.060 21:34:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.060 21:34:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:39.060 21:34:04 -- nvmf/common.sh@717 -- # local ip 00:19:39.060 21:34:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:39.060 21:34:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:39.060 21:34:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.060 21:34:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.060 21:34:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:39.060 21:34:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.060 21:34:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:39.060 21:34:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:39.060 21:34:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:39.060 21:34:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:39.060 21:34:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.060 21:34:04 -- common/autotest_common.sh@10 -- # set +x 00:19:39.623 nvme0n1 00:19:39.623 21:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.623 21:34:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.623 21:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.623 21:34:05 -- common/autotest_common.sh@10 -- # set +x 00:19:39.623 21:34:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:39.623 21:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.623 21:34:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.623 21:34:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.623 21:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.623 21:34:05 -- common/autotest_common.sh@10 -- # set +x 00:19:39.881 21:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.881 21:34:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:39.881 21:34:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:39.881 21:34:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:39.881 21:34:05 -- host/auth.sh@44 -- # digest=sha512 00:19:39.881 21:34:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:39.881 21:34:05 -- host/auth.sh@44 -- # keyid=2 00:19:39.881 21:34:05 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:39.881 21:34:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:39.881 21:34:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:39.881 21:34:05 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:39.881 21:34:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:19:39.881 21:34:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:39.881 21:34:05 -- host/auth.sh@68 -- # digest=sha512 00:19:39.881 21:34:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:39.881 21:34:05 -- host/auth.sh@68 -- # keyid=2 00:19:39.881 21:34:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.881 21:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.881 21:34:05 -- common/autotest_common.sh@10 -- # set +x 00:19:39.881 21:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.881 21:34:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:39.881 21:34:05 -- 
nvmf/common.sh@717 -- # local ip 00:19:39.881 21:34:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:39.881 21:34:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:39.882 21:34:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.882 21:34:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.882 21:34:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:39.882 21:34:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.882 21:34:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:39.882 21:34:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:39.882 21:34:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:39.882 21:34:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:39.882 21:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.882 21:34:05 -- common/autotest_common.sh@10 -- # set +x 00:19:40.140 nvme0n1 00:19:40.140 21:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.140 21:34:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.140 21:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.140 21:34:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:40.140 21:34:05 -- common/autotest_common.sh@10 -- # set +x 00:19:40.140 21:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.397 21:34:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.397 21:34:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.397 21:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.397 21:34:05 -- common/autotest_common.sh@10 -- # set +x 00:19:40.397 21:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.397 21:34:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:40.397 21:34:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:40.398 21:34:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:40.398 21:34:05 -- host/auth.sh@44 -- # digest=sha512 00:19:40.398 21:34:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:40.398 21:34:05 -- host/auth.sh@44 -- # keyid=3 00:19:40.398 21:34:05 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:40.398 21:34:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:40.398 21:34:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:40.398 21:34:05 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:40.398 21:34:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:19:40.398 21:34:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:40.398 21:34:05 -- host/auth.sh@68 -- # digest=sha512 00:19:40.398 21:34:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:40.398 21:34:05 -- host/auth.sh@68 -- # keyid=3 00:19:40.398 21:34:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.398 21:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.398 21:34:05 -- common/autotest_common.sh@10 -- # set +x 00:19:40.398 21:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.398 21:34:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:40.398 21:34:05 -- nvmf/common.sh@717 -- # local ip 00:19:40.398 21:34:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:40.398 21:34:05 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:40.398 21:34:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.398 21:34:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.398 21:34:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:40.398 21:34:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.398 21:34:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:40.398 21:34:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:40.398 21:34:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:40.398 21:34:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:40.398 21:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.398 21:34:05 -- common/autotest_common.sh@10 -- # set +x 00:19:40.971 nvme0n1 00:19:40.971 21:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.971 21:34:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.971 21:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.971 21:34:06 -- common/autotest_common.sh@10 -- # set +x 00:19:40.971 21:34:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:40.971 21:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.971 21:34:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.971 21:34:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.971 21:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.971 21:34:06 -- common/autotest_common.sh@10 -- # set +x 00:19:40.971 21:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.971 21:34:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:40.971 21:34:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:40.971 21:34:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:40.971 21:34:06 -- host/auth.sh@44 -- # digest=sha512 00:19:40.971 21:34:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:40.971 21:34:06 -- host/auth.sh@44 -- # keyid=4 00:19:40.971 21:34:06 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:40.971 21:34:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:40.971 21:34:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:40.971 21:34:06 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:40.971 21:34:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:19:40.971 21:34:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:40.971 21:34:06 -- host/auth.sh@68 -- # digest=sha512 00:19:40.971 21:34:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:40.971 21:34:06 -- host/auth.sh@68 -- # keyid=4 00:19:40.971 21:34:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.971 21:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.971 21:34:06 -- common/autotest_common.sh@10 -- # set +x 00:19:40.971 21:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.971 21:34:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:40.971 21:34:06 -- nvmf/common.sh@717 -- # local ip 00:19:40.971 21:34:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:40.971 21:34:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:40.971 21:34:06 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.971 21:34:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.971 21:34:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:40.971 21:34:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.971 21:34:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:40.971 21:34:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:40.971 21:34:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:40.971 21:34:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:40.971 21:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.971 21:34:06 -- common/autotest_common.sh@10 -- # set +x 00:19:41.537 nvme0n1 00:19:41.537 21:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.537 21:34:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.537 21:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.537 21:34:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:41.537 21:34:06 -- common/autotest_common.sh@10 -- # set +x 00:19:41.537 21:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.537 21:34:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.537 21:34:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.537 21:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.537 21:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:41.537 21:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.537 21:34:07 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.537 21:34:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:41.537 21:34:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:41.537 21:34:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:41.537 21:34:07 -- host/auth.sh@44 -- # digest=sha512 00:19:41.537 21:34:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:41.537 21:34:07 -- host/auth.sh@44 -- # keyid=0 00:19:41.537 21:34:07 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:41.537 21:34:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:41.537 21:34:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:41.537 21:34:07 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVhNzkwYmI1ZThlY2UzNDQyZmJmY2ZjOWY1ZjczNTRJ7dsA: 00:19:41.537 21:34:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:19:41.537 21:34:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:41.537 21:34:07 -- host/auth.sh@68 -- # digest=sha512 00:19:41.537 21:34:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:41.537 21:34:07 -- host/auth.sh@68 -- # keyid=0 00:19:41.537 21:34:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.537 21:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.537 21:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:41.537 21:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.537 21:34:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:41.537 21:34:07 -- nvmf/common.sh@717 -- # local ip 00:19:41.537 21:34:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:41.537 21:34:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:41.537 21:34:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.537 21:34:07 
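# On the other side of each iteration, nvmet_auth_set_key (@42-@49) pushes the same
# digest/dhgroup/secret to the kernel nvmet target. The trace only captures the echo
# half of each redirection; the configfs destinations below are an assumption based
# on the kernel nvmet host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key) and
# are not visible in this log.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[$keyid]}
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
	echo "hmac($digest)" > "$host/dhchap_hash"    # @47: e.g. 'hmac(sha512)'
	echo "$dhgroup" > "$host/dhchap_dhgroup"      # @48: e.g. ffdhe8192
	echo "$key" > "$host/dhchap_key"              # @49: the DHHC-1:xx:...: secret
}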
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.537 21:34:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:41.537 21:34:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.537 21:34:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:41.537 21:34:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:41.537 21:34:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:41.537 21:34:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:41.537 21:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.537 21:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:42.471 nvme0n1 00:19:42.471 21:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.471 21:34:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.471 21:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.471 21:34:08 -- common/autotest_common.sh@10 -- # set +x 00:19:42.471 21:34:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:42.471 21:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.471 21:34:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.471 21:34:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.471 21:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.471 21:34:08 -- common/autotest_common.sh@10 -- # set +x 00:19:42.471 21:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.471 21:34:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:42.471 21:34:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:42.471 21:34:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:42.471 21:34:08 -- host/auth.sh@44 -- # digest=sha512 00:19:42.471 21:34:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:42.471 21:34:08 -- host/auth.sh@44 -- # keyid=1 00:19:42.471 21:34:08 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:42.471 21:34:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:42.471 21:34:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:42.471 21:34:08 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:42.471 21:34:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:19:42.471 21:34:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:42.471 21:34:08 -- host/auth.sh@68 -- # digest=sha512 00:19:42.471 21:34:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:42.471 21:34:08 -- host/auth.sh@68 -- # keyid=1 00:19:42.471 21:34:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.471 21:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.471 21:34:08 -- common/autotest_common.sh@10 -- # set +x 00:19:42.471 21:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.472 21:34:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:42.472 21:34:08 -- nvmf/common.sh@717 -- # local ip 00:19:42.472 21:34:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:42.472 21:34:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:42.472 21:34:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.472 21:34:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.472 21:34:08 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:19:42.472 21:34:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.472 21:34:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:42.472 21:34:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:42.472 21:34:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:42.472 21:34:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:42.472 21:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.472 21:34:08 -- common/autotest_common.sh@10 -- # set +x 00:19:43.406 nvme0n1 00:19:43.406 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.406 21:34:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.406 21:34:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:43.406 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.406 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:19:43.406 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.406 21:34:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.406 21:34:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.406 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.406 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:19:43.664 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.664 21:34:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:43.665 21:34:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:43.665 21:34:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:43.665 21:34:09 -- host/auth.sh@44 -- # digest=sha512 00:19:43.665 21:34:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:43.665 21:34:09 -- host/auth.sh@44 -- # keyid=2 00:19:43.665 21:34:09 -- host/auth.sh@45 -- # key=DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:43.665 21:34:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:43.665 21:34:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:43.665 21:34:09 -- host/auth.sh@49 -- # echo DHHC-1:01:NWRiYTdlZDk0YjE5NTdiMzI5MWMwOTdmYjlhOTYzMDSbFOfl: 00:19:43.665 21:34:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:19:43.665 21:34:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:43.665 21:34:09 -- host/auth.sh@68 -- # digest=sha512 00:19:43.665 21:34:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:43.665 21:34:09 -- host/auth.sh@68 -- # keyid=2 00:19:43.665 21:34:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.665 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.665 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:19:43.665 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.665 21:34:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:43.665 21:34:09 -- nvmf/common.sh@717 -- # local ip 00:19:43.665 21:34:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:43.665 21:34:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:43.665 21:34:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.665 21:34:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.665 21:34:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:43.665 21:34:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.665 21:34:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:43.665 
21:34:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:43.665 21:34:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:43.665 21:34:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:43.665 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.665 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:19:44.607 nvme0n1 00:19:44.607 21:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.607 21:34:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.607 21:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.607 21:34:10 -- common/autotest_common.sh@10 -- # set +x 00:19:44.607 21:34:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:44.607 21:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.607 21:34:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.607 21:34:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.607 21:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.607 21:34:10 -- common/autotest_common.sh@10 -- # set +x 00:19:44.607 21:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.607 21:34:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:44.607 21:34:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:44.607 21:34:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:44.607 21:34:10 -- host/auth.sh@44 -- # digest=sha512 00:19:44.607 21:34:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:44.607 21:34:10 -- host/auth.sh@44 -- # keyid=3 00:19:44.607 21:34:10 -- host/auth.sh@45 -- # key=DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:44.607 21:34:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:44.607 21:34:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:44.607 21:34:10 -- host/auth.sh@49 -- # echo DHHC-1:02:ZDFlN2U0YzU1Y2JkYTdlZmEwNzFmYjM5MTM2NDcxMDdmYzU4MjVlMmUyZGYxNWQ293l/+w==: 00:19:44.607 21:34:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:19:44.607 21:34:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:44.607 21:34:10 -- host/auth.sh@68 -- # digest=sha512 00:19:44.607 21:34:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:44.607 21:34:10 -- host/auth.sh@68 -- # keyid=3 00:19:44.607 21:34:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:44.607 21:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.607 21:34:10 -- common/autotest_common.sh@10 -- # set +x 00:19:44.607 21:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.607 21:34:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:44.607 21:34:10 -- nvmf/common.sh@717 -- # local ip 00:19:44.607 21:34:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:44.607 21:34:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:44.607 21:34:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.607 21:34:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.607 21:34:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:44.607 21:34:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.607 21:34:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:44.607 21:34:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:44.607 21:34:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
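The loop traced here, connect_authenticate, repeats one fixed pattern per key id: program the key into the kernel nvmet target, restrict the SPDK initiator to the matching digest and DH group, attach, verify the controller appears, then detach. Condensed into a sketch for the keyid=3 iteration above; the rpc_cmd invocations are the ones visible in the xtrace, while the configfs destinations of the three echo lines are an assumption (xtrace does not show redirections), and the key is truncated:

    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0
    key='DHHC-1:02:ZDFlN2U0...'                  # truncated secret from the log

    # Target side: kernel nvmet host entry (attribute paths assumed)
    echo 'hmac(sha512)' > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash
    echo ffdhe8192 > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup
    echo "$key" > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key

    # Initiator side: same digest/dhgroup, then attach using the matching key slot
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key3
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # attach worked
    rpc_cmd bdev_nvme_detach_controller nvme0

The two NOT cases further down run the same attach with the key omitted or mismatched and require the JSON-RPC call to fail, which is why the log records -32602 "Invalid parameters" responses there instead of a controller.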
00:19:44.607 21:34:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:44.607 21:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.607 21:34:10 -- common/autotest_common.sh@10 -- # set +x 00:19:45.541 nvme0n1 00:19:45.541 21:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.541 21:34:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.541 21:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.541 21:34:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:45.541 21:34:11 -- common/autotest_common.sh@10 -- # set +x 00:19:45.541 21:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.541 21:34:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.541 21:34:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.541 21:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.541 21:34:11 -- common/autotest_common.sh@10 -- # set +x 00:19:45.541 21:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.541 21:34:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:45.541 21:34:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:45.541 21:34:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:45.541 21:34:11 -- host/auth.sh@44 -- # digest=sha512 00:19:45.541 21:34:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:45.541 21:34:11 -- host/auth.sh@44 -- # keyid=4 00:19:45.541 21:34:11 -- host/auth.sh@45 -- # key=DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:45.541 21:34:11 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:45.541 21:34:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:45.541 21:34:11 -- host/auth.sh@49 -- # echo DHHC-1:03:OTYxNWQ0YmUwNjk4YzNkNzE2YmVhMWMzMDVjOTMyYjdlZjlkODFmOGE2NWNkZWMwNzZjZjFlY2ZmM2Y2YjgxZTK7Sxg=: 00:19:45.541 21:34:11 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:19:45.541 21:34:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:45.541 21:34:11 -- host/auth.sh@68 -- # digest=sha512 00:19:45.541 21:34:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:45.541 21:34:11 -- host/auth.sh@68 -- # keyid=4 00:19:45.541 21:34:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:45.541 21:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.541 21:34:11 -- common/autotest_common.sh@10 -- # set +x 00:19:45.541 21:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.541 21:34:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:45.541 21:34:11 -- nvmf/common.sh@717 -- # local ip 00:19:45.541 21:34:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:45.541 21:34:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:45.541 21:34:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.541 21:34:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.541 21:34:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:45.541 21:34:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.541 21:34:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:45.541 21:34:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:45.541 21:34:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:45.541 21:34:11 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:45.541 21:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.541 21:34:11 -- common/autotest_common.sh@10 -- # set +x 00:19:46.475 nvme0n1 00:19:46.475 21:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.475 21:34:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.475 21:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.475 21:34:12 -- common/autotest_common.sh@10 -- # set +x 00:19:46.475 21:34:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:46.475 21:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.735 21:34:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.735 21:34:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.735 21:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.735 21:34:12 -- common/autotest_common.sh@10 -- # set +x 00:19:46.735 21:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.735 21:34:12 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:46.735 21:34:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:46.735 21:34:12 -- host/auth.sh@44 -- # digest=sha256 00:19:46.735 21:34:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:46.735 21:34:12 -- host/auth.sh@44 -- # keyid=1 00:19:46.735 21:34:12 -- host/auth.sh@45 -- # key=DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:46.735 21:34:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:46.735 21:34:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:46.735 21:34:12 -- host/auth.sh@49 -- # echo DHHC-1:00:NWYxZjA3ZTlkYzBjZjc0MWI3MTcxOWNhYzRhZGYwM2VhYThjOWVhODBkNzY5ZDU3qUrQdA==: 00:19:46.735 21:34:12 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.735 21:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.735 21:34:12 -- common/autotest_common.sh@10 -- # set +x 00:19:46.735 21:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.735 21:34:12 -- host/auth.sh@119 -- # get_main_ns_ip 00:19:46.735 21:34:12 -- nvmf/common.sh@717 -- # local ip 00:19:46.735 21:34:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:46.735 21:34:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:46.735 21:34:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.735 21:34:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.735 21:34:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:46.735 21:34:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.735 21:34:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:46.735 21:34:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:46.735 21:34:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:46.735 21:34:12 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:46.735 21:34:12 -- common/autotest_common.sh@638 -- # local es=0 00:19:46.735 21:34:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:46.735 21:34:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:46.735 21:34:12 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:46.735 21:34:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:46.735 21:34:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:46.735 21:34:12 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:46.735 21:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.735 21:34:12 -- common/autotest_common.sh@10 -- # set +x 00:19:46.735 request: 00:19:46.735 { 00:19:46.735 "name": "nvme0", 00:19:46.735 "trtype": "tcp", 00:19:46.735 "traddr": "10.0.0.1", 00:19:46.735 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:46.735 "adrfam": "ipv4", 00:19:46.735 "trsvcid": "4420", 00:19:46.735 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:46.735 "method": "bdev_nvme_attach_controller", 00:19:46.735 "req_id": 1 00:19:46.735 } 00:19:46.735 Got JSON-RPC error response 00:19:46.735 response: 00:19:46.735 { 00:19:46.735 "code": -32602, 00:19:46.735 "message": "Invalid parameters" 00:19:46.735 } 00:19:46.735 21:34:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:46.735 21:34:12 -- common/autotest_common.sh@641 -- # es=1 00:19:46.735 21:34:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:46.735 21:34:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:46.735 21:34:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:46.735 21:34:12 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.735 21:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.735 21:34:12 -- host/auth.sh@121 -- # jq length 00:19:46.735 21:34:12 -- common/autotest_common.sh@10 -- # set +x 00:19:46.735 21:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.735 21:34:12 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:19:46.735 21:34:12 -- host/auth.sh@124 -- # get_main_ns_ip 00:19:46.735 21:34:12 -- nvmf/common.sh@717 -- # local ip 00:19:46.735 21:34:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:46.735 21:34:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:46.735 21:34:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.735 21:34:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.735 21:34:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:46.735 21:34:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.735 21:34:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:46.735 21:34:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:46.735 21:34:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:46.735 21:34:12 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:46.735 21:34:12 -- common/autotest_common.sh@638 -- # local es=0 00:19:46.735 21:34:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:46.736 21:34:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:46.736 21:34:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:46.736 21:34:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:46.736 21:34:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:46.736 21:34:12 -- 
common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:46.736 21:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.736 21:34:12 -- common/autotest_common.sh@10 -- # set +x 00:19:46.736 request: 00:19:46.736 { 00:19:46.736 "name": "nvme0", 00:19:46.736 "trtype": "tcp", 00:19:46.736 "traddr": "10.0.0.1", 00:19:46.736 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:46.736 "adrfam": "ipv4", 00:19:46.736 "trsvcid": "4420", 00:19:46.736 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:46.736 "dhchap_key": "key2", 00:19:46.736 "method": "bdev_nvme_attach_controller", 00:19:46.736 "req_id": 1 00:19:46.736 } 00:19:46.736 Got JSON-RPC error response 00:19:46.736 response: 00:19:46.736 { 00:19:46.736 "code": -32602, 00:19:46.736 "message": "Invalid parameters" 00:19:46.736 } 00:19:46.736 21:34:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:46.736 21:34:12 -- common/autotest_common.sh@641 -- # es=1 00:19:46.736 21:34:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:46.736 21:34:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:46.736 21:34:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:46.736 21:34:12 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.736 21:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.736 21:34:12 -- common/autotest_common.sh@10 -- # set +x 00:19:46.736 21:34:12 -- host/auth.sh@127 -- # jq length 00:19:46.736 21:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.995 21:34:12 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:19:46.995 21:34:12 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:46.995 21:34:12 -- host/auth.sh@130 -- # cleanup 00:19:46.995 21:34:12 -- host/auth.sh@24 -- # nvmftestfini 00:19:46.995 21:34:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:46.995 21:34:12 -- nvmf/common.sh@117 -- # sync 00:19:46.995 21:34:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:46.995 21:34:12 -- nvmf/common.sh@120 -- # set +e 00:19:46.995 21:34:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.995 21:34:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:46.995 rmmod nvme_tcp 00:19:46.995 rmmod nvme_fabrics 00:19:46.995 21:34:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.995 21:34:12 -- nvmf/common.sh@124 -- # set -e 00:19:46.995 21:34:12 -- nvmf/common.sh@125 -- # return 0 00:19:46.995 21:34:12 -- nvmf/common.sh@478 -- # '[' -n 2661873 ']' 00:19:46.995 21:34:12 -- nvmf/common.sh@479 -- # killprocess 2661873 00:19:46.995 21:34:12 -- common/autotest_common.sh@936 -- # '[' -z 2661873 ']' 00:19:46.995 21:34:12 -- common/autotest_common.sh@940 -- # kill -0 2661873 00:19:46.995 21:34:12 -- common/autotest_common.sh@941 -- # uname 00:19:46.995 21:34:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:46.995 21:34:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2661873 00:19:46.995 21:34:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:46.995 21:34:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:46.995 21:34:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2661873' 00:19:46.995 killing process with pid 2661873 00:19:46.995 21:34:12 -- common/autotest_common.sh@955 -- # kill 2661873 00:19:46.995 21:34:12 -- common/autotest_common.sh@960 -- # wait 2661873 00:19:47.254 21:34:12 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:47.254 21:34:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:47.254 21:34:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:47.254 21:34:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.254 21:34:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.254 21:34:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.254 21:34:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.254 21:34:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.167 21:34:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:49.167 21:34:14 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:49.167 21:34:14 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:49.167 21:34:14 -- host/auth.sh@27 -- # clean_kernel_target 00:19:49.167 21:34:14 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:49.167 21:34:14 -- nvmf/common.sh@675 -- # echo 0 00:19:49.167 21:34:14 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:49.167 21:34:14 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:49.167 21:34:14 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:49.167 21:34:14 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:49.167 21:34:14 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:49.167 21:34:14 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:19:49.427 21:34:14 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:50.361 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:50.361 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:50.619 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:50.619 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:50.619 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:50.619 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:50.619 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:50.619 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:50.619 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:50.619 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:50.619 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:50.619 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:50.619 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:50.619 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:50.619 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:50.619 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:51.553 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:19:51.553 21:34:17 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.HXy /tmp/spdk.key-null.pn3 /tmp/spdk.key-sha256.PZ6 /tmp/spdk.key-sha384.yXc /tmp/spdk.key-sha512.onu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:19:51.553 21:34:17 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:52.928 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:19:52.928 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:19:52.928 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:19:52.928 0000:00:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:19:52.928 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:19:52.928 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:19:52.928 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:19:52.928 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:19:52.928 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:19:52.928 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:19:52.928 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:19:52.928 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:19:52.928 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:19:52.928 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:19:52.928 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:19:52.928 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:19:52.928 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:19:52.928 00:19:52.928 real 0m49.001s 00:19:52.928 user 0m46.632s 00:19:52.928 sys 0m5.569s 00:19:52.928 21:34:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:52.928 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:19:52.928 ************************************ 00:19:52.928 END TEST nvmf_auth 00:19:52.928 ************************************ 00:19:52.928 21:34:18 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:19:52.928 21:34:18 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:52.928 21:34:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:52.928 21:34:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:52.928 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:19:52.928 ************************************ 00:19:52.928 START TEST nvmf_digest 00:19:52.928 ************************************ 00:19:52.928 21:34:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:52.928 * Looking for test storage... 
00:19:52.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:52.928 21:34:18 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.928 21:34:18 -- nvmf/common.sh@7 -- # uname -s 00:19:52.928 21:34:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.928 21:34:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.928 21:34:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.928 21:34:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.928 21:34:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.928 21:34:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.928 21:34:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.928 21:34:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.928 21:34:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.928 21:34:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.928 21:34:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.928 21:34:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.928 21:34:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.928 21:34:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.928 21:34:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.928 21:34:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.928 21:34:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.928 21:34:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.928 21:34:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.928 21:34:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.928 21:34:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.928 21:34:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.928 21:34:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.928 21:34:18 -- paths/export.sh@5 -- # export PATH 00:19:52.928 21:34:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.928 21:34:18 -- nvmf/common.sh@47 -- # : 0 00:19:52.928 21:34:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.928 21:34:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.928 21:34:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.928 21:34:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.928 21:34:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.928 21:34:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.928 21:34:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.928 21:34:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.928 21:34:18 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:52.928 21:34:18 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:52.928 21:34:18 -- host/digest.sh@16 -- # runtime=2 00:19:52.928 21:34:18 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:52.928 21:34:18 -- host/digest.sh@138 -- # nvmftestinit 00:19:52.928 21:34:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:52.928 21:34:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.928 21:34:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:52.928 21:34:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:52.928 21:34:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:52.928 21:34:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.928 21:34:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.928 21:34:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.928 21:34:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:52.928 21:34:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:52.928 21:34:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.928 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:19:55.471 21:34:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:55.471 21:34:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.471 21:34:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.471 21:34:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.471 21:34:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.471 21:34:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.471 21:34:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.471 21:34:20 -- 
nvmf/common.sh@295 -- # net_devs=() 00:19:55.471 21:34:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.471 21:34:20 -- nvmf/common.sh@296 -- # e810=() 00:19:55.471 21:34:20 -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.471 21:34:20 -- nvmf/common.sh@297 -- # x722=() 00:19:55.471 21:34:20 -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.471 21:34:20 -- nvmf/common.sh@298 -- # mlx=() 00:19:55.471 21:34:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.471 21:34:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.471 21:34:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.471 21:34:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.471 21:34:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.471 21:34:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.471 21:34:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.471 21:34:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.471 21:34:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.471 21:34:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.471 21:34:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.471 21:34:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.471 21:34:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.471 21:34:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.471 21:34:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.471 21:34:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.471 21:34:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:55.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:55.471 21:34:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.471 21:34:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:55.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:55.471 21:34:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.471 21:34:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.471 21:34:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.471 21:34:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:55.471 21:34:20 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.471 21:34:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:55.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:55.471 21:34:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.471 21:34:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.471 21:34:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.471 21:34:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:55.471 21:34:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.471 21:34:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:55.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:55.471 21:34:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.471 21:34:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:55.471 21:34:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:55.471 21:34:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:55.471 21:34:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.471 21:34:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.471 21:34:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.471 21:34:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.471 21:34:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.471 21:34:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.471 21:34:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.471 21:34:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.471 21:34:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.471 21:34:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.471 21:34:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.471 21:34:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.471 21:34:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.471 21:34:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.471 21:34:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.471 21:34:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.471 21:34:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.471 21:34:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.471 21:34:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.471 21:34:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:19:55.471 00:19:55.471 --- 10.0.0.2 ping statistics --- 00:19:55.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.471 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:19:55.471 21:34:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:55.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:19:55.471 00:19:55.471 --- 10.0.0.1 ping statistics --- 00:19:55.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.471 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:55.471 21:34:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.471 21:34:20 -- nvmf/common.sh@411 -- # return 0 00:19:55.471 21:34:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:55.471 21:34:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.471 21:34:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:55.471 21:34:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.471 21:34:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:55.471 21:34:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:55.471 21:34:20 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:55.471 21:34:20 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:55.471 21:34:20 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:55.471 21:34:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:55.471 21:34:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:55.471 21:34:20 -- common/autotest_common.sh@10 -- # set +x 00:19:55.471 ************************************ 00:19:55.471 START TEST nvmf_digest_clean 00:19:55.471 ************************************ 00:19:55.471 21:34:20 -- common/autotest_common.sh@1111 -- # run_digest 00:19:55.471 21:34:20 -- host/digest.sh@120 -- # local dsa_initiator 00:19:55.471 21:34:20 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:55.471 21:34:20 -- host/digest.sh@121 -- # dsa_initiator=false 00:19:55.471 21:34:20 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:55.471 21:34:20 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:55.472 21:34:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:55.472 21:34:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:55.472 21:34:20 -- common/autotest_common.sh@10 -- # set +x 00:19:55.472 21:34:20 -- nvmf/common.sh@470 -- # nvmfpid=2671195 00:19:55.472 21:34:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:55.472 21:34:20 -- nvmf/common.sh@471 -- # waitforlisten 2671195 00:19:55.472 21:34:20 -- common/autotest_common.sh@817 -- # '[' -z 2671195 ']' 00:19:55.472 21:34:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.472 21:34:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:55.472 21:34:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.472 21:34:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:55.472 21:34:20 -- common/autotest_common.sh@10 -- # set +x 00:19:55.472 [2024-04-24 21:34:20.883382] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
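Behind those two pings is the topology nvmf_tcp_init built just above: the two cvl_0_* ports of the e810 adapter are split between the root namespace (initiator side, 10.0.0.1 on cvl_0_1) and a private namespace holding the target port. Stripped of the harness wrappers, the setup amounts to the following; every command appears verbatim in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns reaches the target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace reaches the initiator

nvmf_tgt itself is then launched behind the NVMF_TARGET_NS_CMD prefix (ip netns exec cvl_0_0_ns_spdk), which is why it listens on 10.0.0.2 while bdevperf connects from the root namespace.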
00:19:55.472 [2024-04-24 21:34:20.883462] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.472 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.472 [2024-04-24 21:34:20.949482] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.472 [2024-04-24 21:34:21.057181] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.472 [2024-04-24 21:34:21.057235] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.472 [2024-04-24 21:34:21.057248] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.472 [2024-04-24 21:34:21.057260] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.472 [2024-04-24 21:34:21.057270] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.472 [2024-04-24 21:34:21.057305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.472 21:34:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:55.472 21:34:21 -- common/autotest_common.sh@850 -- # return 0 00:19:55.472 21:34:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:55.472 21:34:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:55.472 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:19:55.472 21:34:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.472 21:34:21 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:55.472 21:34:21 -- host/digest.sh@126 -- # common_target_config 00:19:55.472 21:34:21 -- host/digest.sh@43 -- # rpc_cmd 00:19:55.472 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.472 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:19:55.751 null0 00:19:55.751 [2024-04-24 21:34:21.216548] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.751 [2024-04-24 21:34:21.240765] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.751 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.751 21:34:21 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:55.751 21:34:21 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:55.751 21:34:21 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:55.751 21:34:21 -- host/digest.sh@80 -- # rw=randread 00:19:55.751 21:34:21 -- host/digest.sh@80 -- # bs=4096 00:19:55.751 21:34:21 -- host/digest.sh@80 -- # qd=128 00:19:55.751 21:34:21 -- host/digest.sh@80 -- # scan_dsa=false 00:19:55.751 21:34:21 -- host/digest.sh@83 -- # bperfpid=2671224 00:19:55.751 21:34:21 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:55.751 21:34:21 -- host/digest.sh@84 -- # waitforlisten 2671224 /var/tmp/bperf.sock 00:19:55.751 21:34:21 -- common/autotest_common.sh@817 -- # '[' -z 2671224 ']' 00:19:55.751 21:34:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:55.751 21:34:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:55.751 21:34:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:55.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:55.751 21:34:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:55.751 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:19:55.751 [2024-04-24 21:34:21.289645] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:19:55.751 [2024-04-24 21:34:21.289733] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671224 ] 00:19:55.751 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.751 [2024-04-24 21:34:21.357152] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.010 [2024-04-24 21:34:21.475849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.944 21:34:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:56.944 21:34:22 -- common/autotest_common.sh@850 -- # return 0 00:19:56.944 21:34:22 -- host/digest.sh@86 -- # false 00:19:56.944 21:34:22 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:56.944 21:34:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:56.944 21:34:22 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:56.944 21:34:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:57.510 nvme0n1 00:19:57.510 21:34:23 -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:57.510 21:34:23 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:57.510 Running I/O for 2 seconds... 
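Every run_bperf pass follows the choreography just traced: bdevperf starts suspended on its own RPC socket, its framework is released, a controller is attached with the TCP data digest enabled, and only then is the workload driven. In outline, with commands as in the trace and paths shortened (bperf_rpc and bperf_py are the test's wrappers around rpc.py and bdevperf.py aimed at /var/tmp/bperf.sock):

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &    # held until framework_start_init
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --ddgst flag is the point of the exercise: it enables the NVMe/TCP data digest on the controller, so every I/O in the two-second run forces a crc32c computation that the follow-up check can count.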
00:20:00.038 
00:20:00.038 Latency(us)
00:20:00.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:00.038 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:00.038 nvme0n1 : 2.00 18313.88 71.54 0.00 0.00 6979.64 3907.89 13689.74
00:20:00.038 ===================================================================================================================
00:20:00.038 Total : 18313.88 71.54 0.00 0.00 6979.64 3907.89 13689.74
00:20:00.038 0
00:20:00.038 21:34:25 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:20:00.038 21:34:25 -- host/digest.sh@93 -- # get_accel_stats
00:20:00.038 21:34:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:20:00.038 21:34:25 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:20:00.038 | select(.opcode=="crc32c")
00:20:00.038 | "\(.module_name) \(.executed)"'
00:20:00.038 21:34:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:20:00.038 21:34:25 -- host/digest.sh@94 -- # false
00:20:00.038 21:34:25 -- host/digest.sh@94 -- # exp_module=software
00:20:00.038 21:34:25 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:20:00.038 21:34:25 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:20:00.038 21:34:25 -- host/digest.sh@98 -- # killprocess 2671224
00:20:00.038 21:34:25 -- common/autotest_common.sh@936 -- # '[' -z 2671224 ']'
00:20:00.038 21:34:25 -- common/autotest_common.sh@940 -- # kill -0 2671224
00:20:00.038 21:34:25 -- common/autotest_common.sh@941 -- # uname
00:20:00.038 21:34:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:00.038 21:34:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2671224
00:20:00.038 21:34:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:00.038 21:34:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:00.038 21:34:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2671224'
00:20:00.038 killing process with pid 2671224
00:20:00.038 21:34:25 -- common/autotest_common.sh@955 -- # kill 2671224
00:20:00.038 Received shutdown signal, test time was about 2.000000 seconds
00:20:00.038 
00:20:00.038 Latency(us)
00:20:00.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:00.038 ===================================================================================================================
00:20:00.038 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:00.038 21:34:25 -- common/autotest_common.sh@960 -- # wait 2671224
00:20:00.038 21:34:25 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:20:00.038 21:34:25 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:20:00.038 21:34:25 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:20:00.038 21:34:25 -- host/digest.sh@80 -- # rw=randread
00:20:00.038 21:34:25 -- host/digest.sh@80 -- # bs=131072
00:20:00.038 21:34:25 -- host/digest.sh@80 -- # qd=16
00:20:00.038 21:34:25 -- host/digest.sh@80 -- # scan_dsa=false
00:20:00.038 21:34:25 -- host/digest.sh@83 -- # bperfpid=2671764
00:20:00.038 21:34:25 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:20:00.038 21:34:25 -- host/digest.sh@84 -- # waitforlisten 2671764 /var/tmp/bperf.sock
00:20:00.038 21:34:25 -- common/autotest_common.sh@817 -- # '[' -z 2671764 ']'
00:20:00.038 21:34:25 --
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:00.038 21:34:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:00.038 21:34:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:00.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:00.038 21:34:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:00.038 21:34:25 -- common/autotest_common.sh@10 -- # set +x 00:20:00.297 [2024-04-24 21:34:25.747426] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:20:00.297 [2024-04-24 21:34:25.747520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671764 ] 00:20:00.297 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:00.297 Zero copy mechanism will not be used. 00:20:00.297 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.297 [2024-04-24 21:34:25.812253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.297 [2024-04-24 21:34:25.924213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.231 21:34:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:01.231 21:34:26 -- common/autotest_common.sh@850 -- # return 0 00:20:01.231 21:34:26 -- host/digest.sh@86 -- # false 00:20:01.231 21:34:26 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:01.231 21:34:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:01.489 21:34:27 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:01.489 21:34:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:02.054 nvme0n1 00:20:02.054 21:34:27 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:02.054 21:34:27 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:02.054 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:02.054 Zero copy mechanism will not be used. 00:20:02.054 Running I/O for 2 seconds... 
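The verdict for each run is not the IOPS figure but the accel framework's bookkeeping: once the two seconds elapse, the test pulls accel_get_stats over the same socket and keeps only the crc32c operations (jq filter verbatim from the trace). The 131072-byte runs additionally print the zero-copy notice seen above, because the I/O size exceeds the 65536-byte threshold.

    rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[]
                  | select(.opcode=="crc32c")
                  | "\(.module_name) \(.executed)"'
    # With scan_dsa=false the expected line is "software <n>" with n > 0,
    # matching the exp_module=software and (( acc_executed > 0 )) checks.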
00:20:03.952 
00:20:03.952 Latency(us)
00:20:03.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:03.952 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:20:03.952 nvme0n1 : 2.00 2339.23 292.40 0.00 0.00 6835.64 6165.24 9854.67
00:20:03.952 ===================================================================================================================
00:20:03.952 Total : 2339.23 292.40 0.00 0.00 6835.64 6165.24 9854.67
00:20:03.953 0
00:20:03.953 21:34:29 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:20:03.953 21:34:29 -- host/digest.sh@93 -- # get_accel_stats
00:20:03.953 21:34:29 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:20:03.953 21:34:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:20:03.953 21:34:29 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:20:03.953 | select(.opcode=="crc32c")
00:20:03.953 | "\(.module_name) \(.executed)"'
00:20:04.211 21:34:29 -- host/digest.sh@94 -- # false
00:20:04.211 21:34:29 -- host/digest.sh@94 -- # exp_module=software
00:20:04.211 21:34:29 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:20:04.211 21:34:29 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:20:04.211 21:34:29 -- host/digest.sh@98 -- # killprocess 2671764
00:20:04.211 21:34:29 -- common/autotest_common.sh@936 -- # '[' -z 2671764 ']'
00:20:04.211 21:34:29 -- common/autotest_common.sh@940 -- # kill -0 2671764
00:20:04.211 21:34:29 -- common/autotest_common.sh@941 -- # uname
00:20:04.211 21:34:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:04.469 21:34:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2671764
00:20:04.469 21:34:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:04.469 21:34:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:04.469 21:34:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2671764'
00:20:04.469 killing process with pid 2671764
00:20:04.469 21:34:29 -- common/autotest_common.sh@955 -- # kill 2671764
00:20:04.469 Received shutdown signal, test time was about 2.000000 seconds
00:20:04.469 
00:20:04.469 Latency(us)
00:20:04.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:04.469 ===================================================================================================================
00:20:04.469 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:04.469 21:34:29 -- common/autotest_common.sh@960 -- # wait 2671764
00:20:04.726 21:34:30 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:20:04.726 21:34:30 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:20:04.726 21:34:30 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:20:04.726 21:34:30 -- host/digest.sh@80 -- # rw=randwrite
00:20:04.726 21:34:30 -- host/digest.sh@80 -- # bs=4096
00:20:04.726 21:34:30 -- host/digest.sh@80 -- # qd=128
00:20:04.726 21:34:30 -- host/digest.sh@80 -- # scan_dsa=false
00:20:04.726 21:34:30 -- host/digest.sh@83 -- # bperfpid=2672296
00:20:04.726 21:34:30 -- host/digest.sh@84 -- # waitforlisten 2672296 /var/tmp/bperf.sock
00:20:04.726 21:34:30 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:20:04.726 21:34:30 -- common/autotest_common.sh@817 -- # '[' -z 2672296 ']'
00:20:04.726 21:34:30 --
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:04.726 21:34:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:04.726 21:34:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:04.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:04.726 21:34:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:04.726 21:34:30 -- common/autotest_common.sh@10 -- # set +x 00:20:04.726 [2024-04-24 21:34:30.230541] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:20:04.726 [2024-04-24 21:34:30.230659] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672296 ] 00:20:04.726 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.726 [2024-04-24 21:34:30.293316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.984 [2024-04-24 21:34:30.404069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.984 21:34:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.984 21:34:30 -- common/autotest_common.sh@850 -- # return 0 00:20:04.984 21:34:30 -- host/digest.sh@86 -- # false 00:20:04.984 21:34:30 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:04.984 21:34:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:05.242 21:34:30 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:05.242 21:34:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:05.499 nvme0n1 00:20:05.499 21:34:31 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:05.499 21:34:31 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:05.757 Running I/O for 2 seconds... 
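The run above is driven entirely over bdevperf's RPC socket. A minimal sketch of that sequence, with the paths, socket name, and target address copied from this trace (they are specific to this job's workspace):

# Sketch: drive bdevperf over its RPC socket, as run_bperf does above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf paused: -z waits for a bdev to appear, --wait-for-rpc
# defers framework init so accel settings can still be changed over RPC.
$SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK \
    -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# Complete initialization, then attach the target with data digest enabled.
$SPDK/scripts/rpc.py -s $BPERF_SOCK framework_start_init
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the timed workload against the resulting nvme0n1 bdev.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests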
00:20:07.655 00:20:07.655 Latency(us) 00:20:07.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.655 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:07.655 nvme0n1 : 2.01 20615.62 80.53 0.00 0.00 6198.52 3252.53 17185.00 00:20:07.655 =================================================================================================================== 00:20:07.655 Total : 20615.62 80.53 0.00 0.00 6198.52 3252.53 17185.00 00:20:07.655 0 00:20:07.655 21:34:33 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:07.655 21:34:33 -- host/digest.sh@93 -- # get_accel_stats 00:20:07.655 21:34:33 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:07.655 21:34:33 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:07.655 | select(.opcode=="crc32c") 00:20:07.655 | "\(.module_name) \(.executed)"' 00:20:07.655 21:34:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:07.913 21:34:33 -- host/digest.sh@94 -- # false 00:20:07.913 21:34:33 -- host/digest.sh@94 -- # exp_module=software 00:20:07.913 21:34:33 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:07.913 21:34:33 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:07.913 21:34:33 -- host/digest.sh@98 -- # killprocess 2672296 00:20:07.913 21:34:33 -- common/autotest_common.sh@936 -- # '[' -z 2672296 ']' 00:20:07.913 21:34:33 -- common/autotest_common.sh@940 -- # kill -0 2672296 00:20:07.913 21:34:33 -- common/autotest_common.sh@941 -- # uname 00:20:07.913 21:34:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:07.913 21:34:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2672296 00:20:07.913 21:34:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:07.913 21:34:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:07.913 21:34:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2672296' 00:20:07.913 killing process with pid 2672296 00:20:07.913 21:34:33 -- common/autotest_common.sh@955 -- # kill 2672296 00:20:07.913 Received shutdown signal, test time was about 2.000000 seconds 00:20:07.913 00:20:07.913 Latency(us) 00:20:07.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.913 =================================================================================================================== 00:20:07.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:07.913 21:34:33 -- common/autotest_common.sh@960 -- # wait 2672296 00:20:08.171 21:34:33 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:08.171 21:34:33 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:08.171 21:34:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:08.171 21:34:33 -- host/digest.sh@80 -- # rw=randwrite 00:20:08.171 21:34:33 -- host/digest.sh@80 -- # bs=131072 00:20:08.171 21:34:33 -- host/digest.sh@80 -- # qd=16 00:20:08.171 21:34:33 -- host/digest.sh@80 -- # scan_dsa=false 00:20:08.171 21:34:33 -- host/digest.sh@83 -- # bperfpid=2672712 00:20:08.171 21:34:33 -- host/digest.sh@84 -- # waitforlisten 2672712 /var/tmp/bperf.sock 00:20:08.171 21:34:33 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:08.171 21:34:33 -- common/autotest_common.sh@817 -- # '[' -z 2672712 ']' 00:20:08.171 
21:34:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:08.171 21:34:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:08.171 21:34:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:08.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:08.171 21:34:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:08.171 21:34:33 -- common/autotest_common.sh@10 -- # set +x 00:20:08.171 [2024-04-24 21:34:33.811559] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:20:08.171 [2024-04-24 21:34:33.811663] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672712 ] 00:20:08.171 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:08.171 Zero copy mechanism will not be used. 00:20:08.171 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.428 [2024-04-24 21:34:33.873067] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.428 [2024-04-24 21:34:33.986409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.428 21:34:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:08.428 21:34:34 -- common/autotest_common.sh@850 -- # return 0 00:20:08.428 21:34:34 -- host/digest.sh@86 -- # false 00:20:08.428 21:34:34 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:08.429 21:34:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:08.687 21:34:34 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:08.687 21:34:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:09.252 nvme0n1 00:20:09.252 21:34:34 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:09.252 21:34:34 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:09.252 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:09.252 Zero copy mechanism will not be used. 00:20:09.252 Running I/O for 2 seconds... 
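After each timed run the script reads back accel statistics to confirm the crc32c digests were actually computed, and by which module. A sketch of that check, with the jq filter copied verbatim from the trace (SPDK and BPERF_SOCK as in the earlier sketch):

# Sketch: verify digests ran in the software module (scan_dsa=false here),
# mirroring get_accel_stats in host/digest.sh.
read -r acc_module acc_executed < <(
    $SPDK/scripts/rpc.py -s $BPERF_SOCK accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

(( acc_executed > 0 ))              # some crc32c operations must have run
[[ $acc_module == software ]]       # and the software module ran them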
00:20:11.170 00:20:11.170 Latency(us) 00:20:11.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.170 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:11.170 nvme0n1 : 2.01 1564.03 195.50 0.00 0.00 10198.09 6407.96 14175.19 00:20:11.170 =================================================================================================================== 00:20:11.170 Total : 1564.03 195.50 0.00 0.00 10198.09 6407.96 14175.19 00:20:11.170 0 00:20:11.170 21:34:36 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:11.170 21:34:36 -- host/digest.sh@93 -- # get_accel_stats 00:20:11.170 21:34:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:11.170 21:34:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:11.170 21:34:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:11.170 | select(.opcode=="crc32c") 00:20:11.170 | "\(.module_name) \(.executed)"' 00:20:11.428 21:34:37 -- host/digest.sh@94 -- # false 00:20:11.428 21:34:37 -- host/digest.sh@94 -- # exp_module=software 00:20:11.428 21:34:37 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:11.428 21:34:37 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:11.428 21:34:37 -- host/digest.sh@98 -- # killprocess 2672712 00:20:11.428 21:34:37 -- common/autotest_common.sh@936 -- # '[' -z 2672712 ']' 00:20:11.428 21:34:37 -- common/autotest_common.sh@940 -- # kill -0 2672712 00:20:11.428 21:34:37 -- common/autotest_common.sh@941 -- # uname 00:20:11.428 21:34:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:11.428 21:34:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2672712 00:20:11.428 21:34:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:11.428 21:34:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:11.428 21:34:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2672712' 00:20:11.428 killing process with pid 2672712 00:20:11.428 21:34:37 -- common/autotest_common.sh@955 -- # kill 2672712 00:20:11.428 Received shutdown signal, test time was about 2.000000 seconds 00:20:11.428 00:20:11.428 Latency(us) 00:20:11.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.428 =================================================================================================================== 00:20:11.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:11.428 21:34:37 -- common/autotest_common.sh@960 -- # wait 2672712 00:20:11.686 21:34:37 -- host/digest.sh@132 -- # killprocess 2671195 00:20:11.686 21:34:37 -- common/autotest_common.sh@936 -- # '[' -z 2671195 ']' 00:20:11.686 21:34:37 -- common/autotest_common.sh@940 -- # kill -0 2671195 00:20:11.686 21:34:37 -- common/autotest_common.sh@941 -- # uname 00:20:11.686 21:34:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:11.686 21:34:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2671195 00:20:11.686 21:34:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:11.686 21:34:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:11.686 21:34:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2671195' 00:20:11.686 killing process with pid 2671195 00:20:11.686 21:34:37 -- common/autotest_common.sh@955 -- # kill 2671195 00:20:11.686 21:34:37 -- common/autotest_common.sh@960 -- # wait 2671195 
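The killprocess calls seen throughout resolve the process name before signalling, so the log records which reactor is being stopped. A condensed sketch of the pattern from common/autotest_common.sh, reconstructed from the trace above (sudo handling omitted; the real helper has more branches):

# Sketch of the cleanup pattern; works because the targets are children
# of the test shell, so wait can reap them.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap, ignore exit status
}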
00:20:11.944 00:20:11.944 real 0m16.776s 00:20:11.944 user 0m34.092s 00:20:11.944 sys 0m3.819s 00:20:11.944 21:34:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:11.944 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:20:11.944 ************************************ 00:20:11.944 END TEST nvmf_digest_clean 00:20:11.944 ************************************ 00:20:12.202 21:34:37 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:12.202 21:34:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:12.202 21:34:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:12.202 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:20:12.202 ************************************ 00:20:12.202 START TEST nvmf_digest_error 00:20:12.202 ************************************ 00:20:12.202 21:34:37 -- common/autotest_common.sh@1111 -- # run_digest_error 00:20:12.202 21:34:37 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:12.202 21:34:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:12.202 21:34:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:12.202 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:20:12.202 21:34:37 -- nvmf/common.sh@470 -- # nvmfpid=2673271 00:20:12.202 21:34:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:12.202 21:34:37 -- nvmf/common.sh@471 -- # waitforlisten 2673271 00:20:12.202 21:34:37 -- common/autotest_common.sh@817 -- # '[' -z 2673271 ']' 00:20:12.202 21:34:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.202 21:34:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:12.202 21:34:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.202 21:34:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:12.202 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:20:12.202 [2024-04-24 21:34:37.781962] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:20:12.202 [2024-04-24 21:34:37.782048] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.202 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.202 [2024-04-24 21:34:37.844863] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.461 [2024-04-24 21:34:37.951685] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.461 [2024-04-24 21:34:37.951740] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.461 [2024-04-24 21:34:37.951766] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.461 [2024-04-24 21:34:37.951780] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.461 [2024-04-24 21:34:37.951792] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
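The nvmf_digest_error test starting here inverts the previous one: instead of checking that digests pass, it routes the target's crc32c opcode to the accel "error" module so digests can be corrupted on demand. A minimal sketch of that target-side setup, using only the RPCs visible in this trace (the netns wrapping, null0 bdev, and subsystem/listener plumbing from common_target_config are elided):

# Sketch: start the target paused and reroute crc32c before init.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

# While the framework is still waiting, assign the crc32c opcode to the
# "error" module; logged below as "Operation crc32c will be assigned to
# module error". This must happen before framework_start_init.
$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
$SPDK/scripts/rpc.py framework_start_init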
00:20:12.461 [2024-04-24 21:34:37.951834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.395 21:34:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:13.395 21:34:38 -- common/autotest_common.sh@850 -- # return 0 00:20:13.395 21:34:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:13.395 21:34:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:13.395 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:20:13.395 21:34:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.395 21:34:38 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:13.395 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.395 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:20:13.395 [2024-04-24 21:34:38.778352] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:13.395 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.395 21:34:38 -- host/digest.sh@105 -- # common_target_config 00:20:13.395 21:34:38 -- host/digest.sh@43 -- # rpc_cmd 00:20:13.395 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.395 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:20:13.395 null0 00:20:13.395 [2024-04-24 21:34:38.898528] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.395 [2024-04-24 21:34:38.922758] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.395 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.395 21:34:38 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:13.395 21:34:38 -- host/digest.sh@54 -- # local rw bs qd 00:20:13.395 21:34:38 -- host/digest.sh@56 -- # rw=randread 00:20:13.396 21:34:38 -- host/digest.sh@56 -- # bs=4096 00:20:13.396 21:34:38 -- host/digest.sh@56 -- # qd=128 00:20:13.396 21:34:38 -- host/digest.sh@58 -- # bperfpid=2673420 00:20:13.396 21:34:38 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:13.396 21:34:38 -- host/digest.sh@60 -- # waitforlisten 2673420 /var/tmp/bperf.sock 00:20:13.396 21:34:38 -- common/autotest_common.sh@817 -- # '[' -z 2673420 ']' 00:20:13.396 21:34:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:13.396 21:34:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:13.396 21:34:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:13.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:13.396 21:34:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:13.396 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:20:13.396 [2024-04-24 21:34:38.969741] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:20:13.396 [2024-04-24 21:34:38.969819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673420 ] 00:20:13.396 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.396 [2024-04-24 21:34:39.030968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.655 [2024-04-24 21:34:39.145526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.655 21:34:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:13.655 21:34:39 -- common/autotest_common.sh@850 -- # return 0 00:20:13.655 21:34:39 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:13.655 21:34:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:13.912 21:34:39 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:13.912 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.912 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:20:13.912 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.913 21:34:39 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:13.913 21:34:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:14.170 nvme0n1 00:20:14.170 21:34:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:14.170 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.170 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:20:14.429 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.429 21:34:39 -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:14.429 21:34:39 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:14.429 Running I/O for 2 seconds... 
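The corruption is armed just before the workload: injection starts disabled so the controller attaches cleanly, then the target's next 256 crc32c results are corrupted. Every read below therefore fails its data digest check on the initiator, which reports it as COMMAND TRANSIENT TRANSPORT ERROR (00/22), read here as status code type 0x00, status code 0x22, with dnr:0 leaving the command retryable (hence --bdev-retry-count -1). A sketch of the arming sequence, commands as in the trace; note the injection RPCs go to the nvmf target's default socket (it owns the rerouted crc32c), while the bdev_nvme RPCs go to bdevperf's socket:

# Sketch: arm data digest corruption for the randread 4096/128 run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Initiator: count NVMe errors and retry failed I/O indefinitely (-1).
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Target: keep injection disabled while the controller attaches.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target: corrupt the next 256 crc32c operations, then run the workload.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests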
00:20:14.429 [2024-04-24 21:34:39.985399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.429 [2024-04-24 21:34:39.985458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.429 [2024-04-24 21:34:39.985480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.429 [2024-04-24 21:34:40.002638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.429 [2024-04-24 21:34:40.002702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.429 [2024-04-24 21:34:40.002733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.429 [2024-04-24 21:34:40.016535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.429 [2024-04-24 21:34:40.016582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.429 [2024-04-24 21:34:40.016614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.429 [2024-04-24 21:34:40.032806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.429 [2024-04-24 21:34:40.032849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.429 [2024-04-24 21:34:40.032878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.429 [2024-04-24 21:34:40.046982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.429 [2024-04-24 21:34:40.047033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.429 [2024-04-24 21:34:40.047054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.429 [2024-04-24 21:34:40.063912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.429 [2024-04-24 21:34:40.063973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.429 [2024-04-24 21:34:40.064003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.429 [2024-04-24 21:34:40.077190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.429 [2024-04-24 21:34:40.077228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.429 [2024-04-24 21:34:40.077248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.429 [2024-04-24 21:34:40.093337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.429 [2024-04-24 21:34:40.093375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.429 [2024-04-24 21:34:40.093396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.106854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.106887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.106904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.122669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.122724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.122744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.138330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.138376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.138408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.152320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.152357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.152377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.165995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.166041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.166072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.181373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.181419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.181450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.196255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.196293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.196313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.210412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.210457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.210490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.224990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.225037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.225068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.239915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.239975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.240019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.253850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.253895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.253927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.268456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.268502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.268534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.282188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.282233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:14.689 [2024-04-24 21:34:40.282265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.295659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.295717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.295743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.311877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.311924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.311941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.327326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.327373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.327404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.343331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.343376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.343408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.689 [2024-04-24 21:34:40.356451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.689 [2024-04-24 21:34:40.356496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.689 [2024-04-24 21:34:40.356527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.371656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.371707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.371724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.387705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.387761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1978 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.387789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.401574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.401607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.401624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.415830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.415871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.415901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.431322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.431359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.431379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.447781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.447819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.447844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.461023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.461069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.461100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.476242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.476288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.476319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.489615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.489668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.489710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.504183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.504220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.504239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.519943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.519995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.520037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.534410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.534456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.534489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.549651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.549704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.549733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.563351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.563389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.563410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.579062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.948 [2024-04-24 21:34:40.579109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.948 [2024-04-24 21:34:40.579142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.948 [2024-04-24 21:34:40.592618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 
00:20:14.949 [2024-04-24 21:34:40.592687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.949 [2024-04-24 21:34:40.592715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.949 [2024-04-24 21:34:40.607552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.949 [2024-04-24 21:34:40.607590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.949 [2024-04-24 21:34:40.607610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.949 [2024-04-24 21:34:40.621901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:14.949 [2024-04-24 21:34:40.621965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.949 [2024-04-24 21:34:40.621996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.206 [2024-04-24 21:34:40.635478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.206 [2024-04-24 21:34:40.635524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.206 [2024-04-24 21:34:40.635555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.206 [2024-04-24 21:34:40.650609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.206 [2024-04-24 21:34:40.650666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.206 [2024-04-24 21:34:40.650709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.206 [2024-04-24 21:34:40.664859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.206 [2024-04-24 21:34:40.664892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.206 [2024-04-24 21:34:40.664923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.206 [2024-04-24 21:34:40.679657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.206 [2024-04-24 21:34:40.679713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.206 [2024-04-24 21:34:40.679751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.206 [2024-04-24 21:34:40.695046] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.206 [2024-04-24 21:34:40.695091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.206 [2024-04-24 21:34:40.695124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.206 [2024-04-24 21:34:40.708523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.206 [2024-04-24 21:34:40.708568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.206 [2024-04-24 21:34:40.708599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.206 [2024-04-24 21:34:40.724475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.206 [2024-04-24 21:34:40.724520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.206 [2024-04-24 21:34:40.724554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.206 [2024-04-24 21:34:40.736871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.206 [2024-04-24 21:34:40.736901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.206 [2024-04-24 21:34:40.736922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.206 [2024-04-24 21:34:40.752780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.206 [2024-04-24 21:34:40.752811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.206 [2024-04-24 21:34:40.752828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.206 [2024-04-24 21:34:40.766638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.206 [2024-04-24 21:34:40.766699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.206 [2024-04-24 21:34:40.766742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.206 [2024-04-24 21:34:40.782052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.206 [2024-04-24 21:34:40.782099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.207 [2024-04-24 21:34:40.782130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:15.207 [2024-04-24 21:34:40.796767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.207 [2024-04-24 21:34:40.796805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.207 [2024-04-24 21:34:40.796832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.207 [2024-04-24 21:34:40.809916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.207 [2024-04-24 21:34:40.809965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.207 [2024-04-24 21:34:40.809984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.207 [2024-04-24 21:34:40.826251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.207 [2024-04-24 21:34:40.826296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.207 [2024-04-24 21:34:40.826332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.207 [2024-04-24 21:34:40.840501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.207 [2024-04-24 21:34:40.840543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.207 [2024-04-24 21:34:40.840565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.207 [2024-04-24 21:34:40.855192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.207 [2024-04-24 21:34:40.855237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.207 [2024-04-24 21:34:40.855268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.207 [2024-04-24 21:34:40.869074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.207 [2024-04-24 21:34:40.869116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.207 [2024-04-24 21:34:40.869136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.465 [2024-04-24 21:34:40.885589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0) 00:20:15.465 [2024-04-24 21:34:40.885667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.465 [2024-04-24 21:34:40.885699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:15.465 [2024-04-24 21:34:40.898578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11919c0)
00:20:15.465 [2024-04-24 21:34:40.898623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:15.465 [2024-04-24 21:34:40.898687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x11919c0) -> READ command print -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 sqhd:0001) repeats at roughly 15 ms intervals from 21:34:40.913 through 21:34:41.956, differing only in cid and lba; the duplicate entries are elided here ...]
00:20:16.500
00:20:16.500 Latency(us)
00:20:16.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:16.500 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:16.500 nvme0n1 : 2.05 16798.83 65.62 0.00 0.00 7487.32 4199.16 47574.28
00:20:16.500 ===================================================================================================================
00:20:16.500 Total : 16798.83 65.62 0.00 0.00 7487.32 4199.16 47574.28
00:20:16.500 0
00:20:16.500 21:34:42 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:16.500 21:34:42 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:16.500 21:34:42 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:16.500 | .driver_specific
00:20:16.500 | .nvme_error
00:20:16.500 | .status_code
00:20:16.500 | .command_transient_transport_error'
00:20:16.500 21:34:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:16.758 21:34:42 -- host/digest.sh@71 -- # (( 134 > 0 ))
00:20:16.758 21:34:42 -- host/digest.sh@73 -- # killprocess 2673420
00:20:16.758 21:34:42 -- common/autotest_common.sh@936 -- # '[' -z 2673420 ']'
00:20:16.758 21:34:42 -- common/autotest_common.sh@940 -- # kill -0 2673420
00:20:16.758 21:34:42 -- common/autotest_common.sh@941 -- # uname
00:20:16.758 21:34:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:16.758 21:34:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2673420
00:20:16.758 21:34:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:16.758 21:34:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:16.758 21:34:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2673420'
00:20:16.758 killing process with pid 2673420
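The get_transient_errcount step above is how the test decides this digest-error case passed: it reads the bdev's NVMe error counters over the bperf RPC socket and extracts the transient-transport-error count with jq (134 here, hence (( 134 > 0 )) succeeding). A minimal standalone sketch of the same query follows; the rpc.py path, socket, bdev name, and jq filter are taken from the trace, while the function wrapper itself is only illustrative:

  # Sketch: count digest errors that were surfaced as transient transport errors.
  get_transient_errcount() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  errs=$(get_transient_errcount nvme0n1)   # the run above reported 134
  (( errs > 0 )) && echo "nvme0n1 saw $errs transient transport errors"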
00:20:16.758 21:34:42 -- common/autotest_common.sh@955 -- # kill 2673420
00:20:16.758 Received shutdown signal, test time was about 2.000000 seconds
00:20:16.758
00:20:16.758 Latency(us)
00:20:16.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:16.758 ===================================================================================================================
00:20:16.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:16.758 21:34:42 -- common/autotest_common.sh@960 -- # wait 2673420
00:20:17.016 21:34:42 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:20:17.016 21:34:42 -- host/digest.sh@54 -- # local rw bs qd
00:20:17.016 21:34:42 -- host/digest.sh@56 -- # rw=randread
00:20:17.016 21:34:42 -- host/digest.sh@56 -- # bs=131072
00:20:17.016 21:34:42 -- host/digest.sh@56 -- # qd=16
00:20:17.016 21:34:42 -- host/digest.sh@58 -- # bperfpid=2673830
00:20:17.016 21:34:42 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:20:17.016 21:34:42 -- host/digest.sh@60 -- # waitforlisten 2673830 /var/tmp/bperf.sock
00:20:17.016 21:34:42 -- common/autotest_common.sh@817 -- # '[' -z 2673830 ']'
00:20:17.017 21:34:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:17.017 21:34:42 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:17.017 21:34:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:17.017 21:34:42 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:17.017 21:34:42 -- common/autotest_common.sh@10 -- # set +x
00:20:17.017 [2024-04-24 21:34:42.606534] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
[2024-04-24 21:34:42.606640] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673830 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
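The relaunch above is the run_bperf_err randread 131072 16 variant of the same error test: bdevperf is started with -z so it idles waiting for RPCs on its private socket, letting the test attach and configure the controller before any I/O is issued. A condensed sketch of that launch pattern with the flags copied from the trace; backgrounding with & and capturing $! is an assumption about how the helper script does it, not something visible in this log:

  # Sketch: start bdevperf in wait-for-RPC mode on a private socket.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand for this log's workspace
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # The suite's waitforlisten helper (common/autotest_common.sh) then polls,
  # up to max_retries times, until /var/tmp/bperf.sock accepts connections.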
00:20:17.017 EAL: No free 2048 kB hugepages reported on node 1
00:20:17.017 [2024-04-24 21:34:42.665878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:17.017 [2024-04-24 21:34:42.772711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:17.275 21:34:42 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:17.275 21:34:42 -- common/autotest_common.sh@850 -- # return 0
00:20:17.275 21:34:42 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:17.275 21:34:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:17.533 21:34:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:17.533 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:17.533 21:34:43 -- common/autotest_common.sh@10 -- # set +x
00:20:17.533 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:17.533 21:34:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:17.533 21:34:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:18.099 nvme0n1
00:20:18.099 21:34:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:18.099 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:18.099 21:34:43 -- common/autotest_common.sh@10 -- # set +x
00:20:18.099 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:18.099 21:34:43 -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:18.099 21:34:43 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:18.099 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:18.099 Zero copy mechanism will not be used.
00:20:18.099 Running I/O for 2 seconds...
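Four RPCs shape the run before perform_tests is kicked off: enable per-command NVMe error statistics with unlimited bdev retries, clear any leftover accel error injection, attach the target with the TCP data digest enabled (--ddgst), and only then arm the crc32c injector to corrupt operations. A sketch of that sequence; the commands and arguments are verbatim from the trace, but note the two sockets involved: bperf_rpc targets bdevperf via /var/tmp/bperf.sock, while rpc_cmd goes to the target application, represented below by a $target_rpc variable on the default socket (an assumption made so the sketch is self-contained):

  # Sketch of the error-injection setup sequence seen above.
  bperf_rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  target_rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"  # default socket; stands in for rpc_cmd
  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # keep error stats, retry forever
  $target_rpc accel_error_inject_error -o crc32c -t disable                 # clear stale injections
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # attach with data digest on
  $target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32           # corrupt crc32c operations

With --bdev-retry-count -1 the corrupted digests never bubble up as job failures; they are retried and only show up in the iostat error counters, which is exactly what the transient-error count check earlier relies on.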
00:20:18.099 [2024-04-24 21:34:43.746764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0)
00:20:18.099 [2024-04-24 21:34:43.746816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.099 [2024-04-24 21:34:43.746836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats at roughly 14 ms intervals from 21:34:43.761 through 21:34:44.504, always on tqpair=(0x1c394d0), always qid:1 cid:15 len:32, with only the lba and the sqhd value (cycling 0021/0041/0061/0001) changing; the duplicate entries are elided here, and the final entry is cut off at the end of this excerpt ...]
p:0 m:0 dnr:0 00:20:18.879 [2024-04-24 21:34:44.518756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:18.879 [2024-04-24 21:34:44.518785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.879 [2024-04-24 21:34:44.518817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.879 [2024-04-24 21:34:44.533038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:18.879 [2024-04-24 21:34:44.533070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.879 [2024-04-24 21:34:44.533088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.879 [2024-04-24 21:34:44.547073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:18.879 [2024-04-24 21:34:44.547105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.879 [2024-04-24 21:34:44.547123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.561040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.561073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.561092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.575085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.575116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.575135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.589244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.589276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.589295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.603293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.603325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.603345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.617182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.617214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.617233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.631096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.631125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.631141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.644507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.644550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.644567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.658276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.658308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.658327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.672217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.672250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.672268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.686108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.686141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.686159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.700532] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.700565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.700672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.714892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.714922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.714953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.728992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.729026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.729044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.743271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.743304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.743323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.757198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.757232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.757251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.771198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.771231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.771250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.785088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.785121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.785140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.799322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.799354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:19.137 [2024-04-24 21:34:44.799373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.137 [2024-04-24 21:34:44.813551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.137 [2024-04-24 21:34:44.813586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.137 [2024-04-24 21:34:44.813605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.395 [2024-04-24 21:34:44.827598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.395 [2024-04-24 21:34:44.827650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.395 [2024-04-24 21:34:44.827685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.395 [2024-04-24 21:34:44.841582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.395 [2024-04-24 21:34:44.841616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.395 [2024-04-24 21:34:44.841644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.395 [2024-04-24 21:34:44.855682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.395 [2024-04-24 21:34:44.855711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.395 [2024-04-24 21:34:44.855727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.395 [2024-04-24 21:34:44.869680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.395 [2024-04-24 21:34:44.869711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.395 [2024-04-24 21:34:44.869727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.395 [2024-04-24 21:34:44.884054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.395 [2024-04-24 21:34:44.884087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.395 [2024-04-24 21:34:44.884106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.395 [2024-04-24 21:34:44.898046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.395 [2024-04-24 21:34:44.898078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.395 [2024-04-24 21:34:44.898096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.395 [2024-04-24 21:34:44.912059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.395 [2024-04-24 21:34:44.912091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.395 [2024-04-24 21:34:44.912109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.395 [2024-04-24 21:34:44.926072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.395 [2024-04-24 21:34:44.926105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.395 [2024-04-24 21:34:44.926123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.395 [2024-04-24 21:34:44.940030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.396 [2024-04-24 21:34:44.940062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.396 [2024-04-24 21:34:44.940081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.396 [2024-04-24 21:34:44.954064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.396 [2024-04-24 21:34:44.954097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.396 [2024-04-24 21:34:44.954115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.396 [2024-04-24 21:34:44.968078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.396 [2024-04-24 21:34:44.968111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.396 [2024-04-24 21:34:44.968130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.396 [2024-04-24 21:34:44.982087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.396 [2024-04-24 21:34:44.982120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.396 [2024-04-24 21:34:44.982139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.396 [2024-04-24 21:34:44.996020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.396 [2024-04-24 21:34:44.996054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.396 [2024-04-24 21:34:44.996072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.396 [2024-04-24 21:34:45.010113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.396 [2024-04-24 21:34:45.010147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.396 [2024-04-24 21:34:45.010165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.396 [2024-04-24 21:34:45.023947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.396 [2024-04-24 21:34:45.023981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.396 [2024-04-24 21:34:45.023999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.396 [2024-04-24 21:34:45.037938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.396 [2024-04-24 21:34:45.037982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.396 [2024-04-24 21:34:45.037999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.396 [2024-04-24 21:34:45.051954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.396 [2024-04-24 21:34:45.051997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.396 [2024-04-24 21:34:45.052013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.396 [2024-04-24 21:34:45.066039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.396 [2024-04-24 21:34:45.066071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.396 [2024-04-24 21:34:45.066096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.080118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.080153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.080172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.094115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 
00:20:19.655 [2024-04-24 21:34:45.094148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.094166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.108098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.108131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.108150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.122176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.122208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.122226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.136202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.136234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.136252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.150187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.150219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.150238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.164177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.164210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.164229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.178182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.178215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.178233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.192141] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.192174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.192193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.206060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.206093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.206111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.220085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.220119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.220138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.234082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.234117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.234136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.248054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.248087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.248105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.262019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.262052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.262070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.276007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.276039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.276057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.290076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.290108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.290127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.304071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.304103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.304128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.655 [2024-04-24 21:34:45.318101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.655 [2024-04-24 21:34:45.318134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.655 [2024-04-24 21:34:45.318152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.914 [2024-04-24 21:34:45.332093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.914 [2024-04-24 21:34:45.332126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.914 [2024-04-24 21:34:45.332145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.914 [2024-04-24 21:34:45.346052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.914 [2024-04-24 21:34:45.346084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.914 [2024-04-24 21:34:45.346102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.914 [2024-04-24 21:34:45.360033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.914 [2024-04-24 21:34:45.360066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.914 [2024-04-24 21:34:45.360085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.914 [2024-04-24 21:34:45.374081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.914 [2024-04-24 21:34:45.374114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.914 [2024-04-24 21:34:45.374132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.914 [2024-04-24 21:34:45.388080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.914 [2024-04-24 21:34:45.388114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.914 [2024-04-24 21:34:45.388132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.914 [2024-04-24 21:34:45.402099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.914 [2024-04-24 21:34:45.402132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.914 [2024-04-24 21:34:45.402149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.914 [2024-04-24 21:34:45.416058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.914 [2024-04-24 21:34:45.416091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.914 [2024-04-24 21:34:45.416109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.429895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.429947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.915 [2024-04-24 21:34:45.429966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.443973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.444005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.915 [2024-04-24 21:34:45.444024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.458026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.458069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.915 [2024-04-24 21:34:45.458088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.472043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.472076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.915 [2024-04-24 21:34:45.472094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.486050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.486082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.915 [2024-04-24 21:34:45.486100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.500012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.500044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.915 [2024-04-24 21:34:45.500062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.514222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.514254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.915 [2024-04-24 21:34:45.514272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.528185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.528217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.915 [2024-04-24 21:34:45.528235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.542000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.542043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.915 [2024-04-24 21:34:45.542060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.556263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.556296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.915 [2024-04-24 21:34:45.556315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.570174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.570207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:19.915 [2024-04-24 21:34:45.570226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.915 [2024-04-24 21:34:45.584154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:19.915 [2024-04-24 21:34:45.584185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.915 [2024-04-24 21:34:45.584204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.174 [2024-04-24 21:34:45.598313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:20.174 [2024-04-24 21:34:45.598348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.174 [2024-04-24 21:34:45.598367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:20.174 [2024-04-24 21:34:45.612487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:20.174 [2024-04-24 21:34:45.612520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.174 [2024-04-24 21:34:45.612539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:20.174 [2024-04-24 21:34:45.626587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:20.174 [2024-04-24 21:34:45.626620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.174 [2024-04-24 21:34:45.626648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:20.174 [2024-04-24 21:34:45.640774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:20.174 [2024-04-24 21:34:45.640802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.174 [2024-04-24 21:34:45.640820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.174 [2024-04-24 21:34:45.654763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:20.174 [2024-04-24 21:34:45.654802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.174 [2024-04-24 21:34:45.654818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:20.174 [2024-04-24 21:34:45.668940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0) 00:20:20.174 [2024-04-24 21:34:45.668981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:20.174 [2024-04-24 21:34:45.669002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:20.174 [2024-04-24 21:34:45.683080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0)
00:20:20.174 [2024-04-24 21:34:45.683113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:20.174 [2024-04-24 21:34:45.683131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:20.174 [2024-04-24 21:34:45.697001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0)
00:20:20.174 [2024-04-24 21:34:45.697034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:20.174 [2024-04-24 21:34:45.697053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:20.174 [2024-04-24 21:34:45.710979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0)
00:20:20.174 [2024-04-24 21:34:45.711012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:20.174 [2024-04-24 21:34:45.711030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:20.174 [2024-04-24 21:34:45.725040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0)
00:20:20.174 [2024-04-24 21:34:45.725072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:20.174 [2024-04-24 21:34:45.725090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:20.174 [2024-04-24 21:34:45.739144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c394d0)
00:20:20.174 [2024-04-24 21:34:45.739176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:20.174 [2024-04-24 21:34:45.739195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:20.174
00:20:20.174 Latency(us)
00:20:20.174 Device Information          : runtime(s)     IOPS     MiB/s   Fail/s     TO/s   Average      min      max
00:20:20.174 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:20:20.174 nvme0n1                     :       2.01  2208.91    276.11     0.00     0.00   7236.73  6310.87 14757.74
00:20:20.174 ===================================================================================================================
00:20:20.174 Total                       :            2208.91    276.11     0.00     0.00   7236.73  6310.87 14757.74
00:20:20.174 0
00:20:20.174 21:34:45 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:20.174 21:34:45 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:20.174 21:34:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:20.174 21:34:45 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:20.174 | .driver_specific
00:20:20.174 | .nvme_error
00:20:20.174 | .status_code
00:20:20.174 | .command_transient_transport_error'
00:20:20.432 21:34:45 -- host/digest.sh@71 -- # (( 143 > 0 ))
00:20:20.432 21:34:45 -- host/digest.sh@73 -- # killprocess 2673830
00:20:20.432 21:34:45 -- common/autotest_common.sh@936 -- # '[' -z 2673830 ']'
00:20:20.432 21:34:45 -- common/autotest_common.sh@940 -- # kill -0 2673830
00:20:20.432 21:34:45 -- common/autotest_common.sh@941 -- # uname
00:20:20.432 21:34:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:20.432 21:34:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2673830
00:20:20.432 21:34:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:20.432 21:34:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:20.432 21:34:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2673830'
00:20:20.432 killing process with pid 2673830
00:20:20.432 21:34:46 -- common/autotest_common.sh@955 -- # kill 2673830
00:20:20.432 Received shutdown signal, test time was about 2.000000 seconds
00:20:20.432
00:20:20.432 Latency(us)
00:20:20.432 Device Information          : runtime(s)     IOPS     MiB/s   Fail/s     TO/s   Average      min      max
00:20:20.432 ===================================================================================================================
00:20:20.433 Total                       :               0.00      0.00     0.00     0.00      0.00     0.00     0.00
00:20:20.433 21:34:46 -- common/autotest_common.sh@960 -- # wait 2673830
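Before tearing bdevperf down, the script verifies the read phase: get_transient_errcount asks the still-running bdevperf for its I/O statistics over the RPC socket and extracts the per-status error counter that --nvme-error-stat maintains, and the phase passes because a nonzero count, 143 here, was recorded. A minimal standalone sketch of that same check, assuming SPDK's scripts/rpc.py and the /var/tmp/bperf.sock socket used in this job:
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) || echo "no transient transport errors recorded" >&2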
00:20:20.691 21:34:46 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:20:20.691 21:34:46 -- host/digest.sh@54 -- # local rw bs qd
00:20:20.691 21:34:46 -- host/digest.sh@56 -- # rw=randwrite
00:20:20.691 21:34:46 -- host/digest.sh@56 -- # bs=4096
00:20:20.691 21:34:46 -- host/digest.sh@56 -- # qd=128
00:20:20.691 21:34:46 -- host/digest.sh@58 -- # bperfpid=2674240
00:20:20.691 21:34:46 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:20:20.691 21:34:46 -- host/digest.sh@60 -- # waitforlisten 2674240 /var/tmp/bperf.sock
00:20:20.691 21:34:46 -- common/autotest_common.sh@817 -- # '[' -z 2674240 ']'
00:20:20.691 21:34:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:20.691 21:34:46 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:20.691 21:34:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:20.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:20.691 21:34:46 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:20.691 21:34:46 -- common/autotest_common.sh@10 -- # set +x
00:20:20.691 [2024-04-24 21:34:46.313388] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:20:20.691 [2024-04-24 21:34:46.313483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674240 ]
00:20:20.691 EAL: No free 2048 kB hugepages reported on node 1
00:20:20.949 [2024-04-24 21:34:46.378729] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:20.949 [2024-04-24 21:34:46.487401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:20.949 21:34:46 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:20.949 21:34:46 -- common/autotest_common.sh@850 -- # return 0
00:20:20.949 21:34:46 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:20.949 21:34:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:21.207 21:34:46 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:21.207 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:21.207 21:34:46 -- common/autotest_common.sh@10 -- # set +x
00:20:21.207 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:21.207 21:34:46 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:21.207 21:34:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:21.781 nvme0n1
00:20:21.781 21:34:47 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:20:21.781 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:21.781 21:34:47 -- common/autotest_common.sh@10 -- # set +x
00:20:21.781 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:21.781 21:34:47 -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:21.781 21:34:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:21.781 Running I/O for 2 seconds...
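The randwrite phase set up above repeats the read-phase recipe: per-status error counting is enabled in bdevperf's bdev_nvme driver, CRC32C error injection is switched off so the controller can attach cleanly with --ddgst (every data PDU then carries a CRC32C data digest), and injection is re-armed in corrupt mode before perform_tests starts the workload. Condensed from the trace above as a sketch (rpc.py stands for the full scripts/rpc.py path used in this job; note that bperf_rpc targets /var/tmp/bperf.sock while rpc_cmd uses the default RPC socket):
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py accel_error_inject_error -o crc32c -t disable
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  bdevperf.py -s /var/tmp/bperf.sock perform_tests
The 'Data digest error' lines that follow are the intended outcome: with the crc32c operation corrupted, digest verification fails on the NVMe/TCP connection and each affected WRITE completes with the same COMMAND TRANSIENT TRANSPORT ERROR (00/22) that the counter check looks for.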
00:20:21.781 [2024-04-24 21:34:47.302870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640
00:20:21.781 [2024-04-24 21:34:47.303244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.781 [2024-04-24 21:34:47.303286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:20:21.781 [2024-04-24 21:34:47.317511] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640
00:20:21.781 [2024-04-24 21:34:47.317922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.781 [2024-04-24 21:34:47.317969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... the same three-line sequence (a data digest error on tqpair 0x1732830, the failing WRITE command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd:007e) repeats at roughly 13-15 ms intervals for the rest of the 2-second run, from 21:34:47.332 through 21:34:49.092, cycling cid 1/0/4/3 on qid:1 with varying LBAs ...]
00:20:23.595 [2024-04-24 21:34:49.105959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640
00:20:23.595 [2024-04-24 21:34:49.106289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:23.595 [2024-04-24 21:34:49.106320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:20:23.595 [2024-04-24 21:34:49.120118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640
00:20:23.595 [2024-04-24 21:34:49.120445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9242 len:1 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:20:23.596 [2024-04-24 21:34:49.120478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:23.596 [2024-04-24 21:34:49.134351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640 00:20:23.596 [2024-04-24 21:34:49.134693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:23.596 [2024-04-24 21:34:49.134721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:23.596 [2024-04-24 21:34:49.148602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640 00:20:23.596 [2024-04-24 21:34:49.148980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:23.596 [2024-04-24 21:34:49.149013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:23.596 [2024-04-24 21:34:49.162948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640 00:20:23.596 [2024-04-24 21:34:49.163276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:23.596 [2024-04-24 21:34:49.163310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:23.596 [2024-04-24 21:34:49.177160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640 00:20:23.596 [2024-04-24 21:34:49.177491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:23.596 [2024-04-24 21:34:49.177522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:23.596 [2024-04-24 21:34:49.191482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640 00:20:23.596 [2024-04-24 21:34:49.191815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:23.596 [2024-04-24 21:34:49.191844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:23.596 [2024-04-24 21:34:49.205679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640 00:20:23.596 [2024-04-24 21:34:49.206029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:23.596 [2024-04-24 21:34:49.206062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:23.596 [2024-04-24 21:34:49.219933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640 00:20:23.596 [2024-04-24 21:34:49.220317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6072 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000
00:20:23.596 [2024-04-24 21:34:49.220344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:20:23.596 [2024-04-24 21:34:49.233763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640
00:20:23.596 [2024-04-24 21:34:49.234037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:23.596 [2024-04-24 21:34:49.234065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:20:23.596 [2024-04-24 21:34:49.246922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640
00:20:23.596 [2024-04-24 21:34:49.247298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:23.596 [2024-04-24 21:34:49.247326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:20:23.596 [2024-04-24 21:34:49.260106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640
00:20:23.596 [2024-04-24 21:34:49.260490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:23.596 [2024-04-24 21:34:49.260533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:20:23.854 [2024-04-24 21:34:49.273353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640
00:20:23.854 [2024-04-24 21:34:49.273749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:23.854 [2024-04-24 21:34:49.273778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:20:23.854 [2024-04-24 21:34:49.286687] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732830) with pdu=0x2000190fd640
00:20:23.854 [2024-04-24 21:34:49.287033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:23.854 [2024-04-24 21:34:49.287075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:20:23.854
00:20:23.854 Latency(us)
00:20:23.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:23.854 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:20:23.854 nvme0n1 : 2.01 18533.53 72.40 0.00 0.00 6890.63 2961.26 14757.74
00:20:23.854 ===================================================================================================================
00:20:23.854 Total : 18533.53 72.40 0.00 0.00 6890.63 2961.26 14757.74
00:20:23.854 0
00:20:23.854 21:34:49 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:23.854 21:34:49 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:23.854 21:34:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:23.854 21:34:49 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:23.854 | .driver_specific
00:20:23.854 | .nvme_error
00:20:23.854 | .status_code
00:20:23.854 | .command_transient_transport_error'
00:20:24.112 21:34:49 -- host/digest.sh@71 -- # (( 145 > 0 ))
00:20:24.112 21:34:49 -- host/digest.sh@73 -- # killprocess 2674240
00:20:24.112 21:34:49 -- common/autotest_common.sh@936 -- # '[' -z 2674240 ']'
00:20:24.112 21:34:49 -- common/autotest_common.sh@940 -- # kill -0 2674240
00:20:24.112 21:34:49 -- common/autotest_common.sh@941 -- # uname
00:20:24.112 21:34:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:24.112 21:34:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2674240
00:20:24.112 21:34:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:24.112 21:34:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:24.112 21:34:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2674240'
00:20:24.112 killing process with pid 2674240
00:20:24.112 21:34:49 -- common/autotest_common.sh@955 -- # kill 2674240
00:20:24.112 Received shutdown signal, test time was about 2.000000 seconds
00:20:24.112
00:20:24.112 Latency(us)
00:20:24.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:24.112 ===================================================================================================================
00:20:24.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:24.112 21:34:49 -- common/autotest_common.sh@960 -- # wait 2674240
00:20:24.370 21:34:49 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:20:24.370 21:34:49 -- host/digest.sh@54 -- # local rw bs qd
00:20:24.370 21:34:49 -- host/digest.sh@56 -- # rw=randwrite
00:20:24.370 21:34:49 -- host/digest.sh@56 -- # bs=131072
00:20:24.370 21:34:49 -- host/digest.sh@56 -- # qd=16
00:20:24.370 21:34:49 -- host/digest.sh@58 -- # bperfpid=2674655
00:20:24.370 21:34:49 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:20:24.370 21:34:49 -- host/digest.sh@60 -- # waitforlisten 2674655 /var/tmp/bperf.sock
00:20:24.370 21:34:49 -- common/autotest_common.sh@817 -- # '[' -z 2674655 ']'
00:20:24.370 21:34:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:24.370 21:34:49 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:24.370 21:34:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:24.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:24.370 21:34:49 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:24.370 21:34:49 -- common/autotest_common.sh@10 -- # set +x
00:20:24.370 [2024-04-24 21:34:49.898663] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:20:24.370 [2024-04-24 21:34:49.898744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674655 ]
00:20:24.370 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:24.370 Zero copy mechanism will not be used.
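[editor's note] The pass/fail check traced above is the heart of this test: host/digest.sh asks the bperf bdevperf instance for per-bdev I/O statistics and digs the NVMe error counters out of the driver-specific section. With --nvme-error-stat enabled, each injected digest failure is tallied under status_code.command_transient_transport_error, and the run passes only if that counter is non-zero (145 here). A minimal standalone sketch of the same check, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Read iostat for nvme0n1 and extract the transient transport error count
    errs=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))   # test fails unless the injected digest errors were counted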
00:20:24.370 EAL: No free 2048 kB hugepages reported on node 1
00:20:24.370 [2024-04-24 21:34:49.958269] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:24.629 [2024-04-24 21:34:50.078418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:24.629 21:34:50 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:24.629 21:34:50 -- common/autotest_common.sh@850 -- # return 0
00:20:24.629 21:34:50 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:24.629 21:34:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:24.887 21:34:50 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:24.887 21:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.887 21:34:50 -- common/autotest_common.sh@10 -- # set +x
00:20:24.887 21:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.887 21:34:50 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:24.887 21:34:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:25.457 nvme0n1
00:20:25.457 21:34:50 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:25.457 21:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.457 21:34:50 -- common/autotest_common.sh@10 -- # set +x
00:20:25.457 21:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.457 21:34:50 -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:25.457 21:34:50 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:25.457 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:25.457 Zero copy mechanism will not be used.
00:20:25.457 Running I/O for 2 seconds...
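[editor's note] The trace above shows the setup pattern each run_bperf_err iteration repeats before its timed I/O begins: bdevperf is launched against its own RPC socket, NVMe error statistics and unlimited bdev retries are switched on, crc32c error injection is disabled while the controller attaches with data digest (--ddgst) so the TCP connect itself is clean, and only then is corruption re-armed before perform_tests starts the 2-second run. A condensed sketch of that sequence under two assumptions: rpc_cmd without -s talks to the nvmf target's default socket, and -i 32 is the injection interval (every 32nd crc32c operation corrupted):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Launch bdevperf on its own RPC socket; the harness waits for the
    # socket (waitforlisten) before issuing any RPCs against it
    $spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable      # clean connect, no corruption yet
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # re-arm digest corruption on the target
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Because retries are unlimited (--bdev-retry-count -1), every corrupted digest surfaces below as a recoverable COMMAND TRANSIENT TRANSPORT ERROR rather than a failed job.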
00:20:25.457 [2024-04-24 21:34:51.025917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.457 [2024-04-24 21:34:51.026526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.457 [2024-04-24 21:34:51.026582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.457 [2024-04-24 21:34:51.047605] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.457 [2024-04-24 21:34:51.048081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.457 [2024-04-24 21:34:51.048115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.457 [2024-04-24 21:34:51.071568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.457 [2024-04-24 21:34:51.071979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.457 [2024-04-24 21:34:51.072027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.457 [2024-04-24 21:34:51.095727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.457 [2024-04-24 21:34:51.096233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.457 [2024-04-24 21:34:51.096280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.457 [2024-04-24 21:34:51.117127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.457 [2024-04-24 21:34:51.117520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.457 [2024-04-24 21:34:51.117567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.715 [2024-04-24 21:34:51.140789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.715 [2024-04-24 21:34:51.141302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.715 [2024-04-24 21:34:51.141331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.715 [2024-04-24 21:34:51.161462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.715 [2024-04-24 21:34:51.161984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.715 [2024-04-24 21:34:51.162029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.715 [2024-04-24 21:34:51.182876] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.715 [2024-04-24 21:34:51.183408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.716 [2024-04-24 21:34:51.183454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.716 [2024-04-24 21:34:51.200892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.716 [2024-04-24 21:34:51.201172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.716 [2024-04-24 21:34:51.201216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.716 [2024-04-24 21:34:51.220345] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.716 [2024-04-24 21:34:51.220954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.716 [2024-04-24 21:34:51.220999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.716 [2024-04-24 21:34:51.240656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.716 [2024-04-24 21:34:51.241218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.716 [2024-04-24 21:34:51.241245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.716 [2024-04-24 21:34:51.262154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.716 [2024-04-24 21:34:51.262594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.716 [2024-04-24 21:34:51.262622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.716 [2024-04-24 21:34:51.281722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.716 [2024-04-24 21:34:51.282159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.716 [2024-04-24 21:34:51.282186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.716 [2024-04-24 21:34:51.301960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.716 [2024-04-24 21:34:51.302491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.716 [2024-04-24 21:34:51.302537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.716 [2024-04-24 21:34:51.323123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.716 [2024-04-24 21:34:51.323684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.716 [2024-04-24 21:34:51.323714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.716 [2024-04-24 21:34:51.343811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.716 [2024-04-24 21:34:51.344260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.716 [2024-04-24 21:34:51.344287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.716 [2024-04-24 21:34:51.365394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.716 [2024-04-24 21:34:51.365870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.716 [2024-04-24 21:34:51.365899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.716 [2024-04-24 21:34:51.387875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.716 [2024-04-24 21:34:51.388389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.716 [2024-04-24 21:34:51.388418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.974 [2024-04-24 21:34:51.409122] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.974 [2024-04-24 21:34:51.409553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.974 [2024-04-24 21:34:51.409581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.974 [2024-04-24 21:34:51.428326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.974 [2024-04-24 21:34:51.428877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.974 [2024-04-24 21:34:51.428905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.974 [2024-04-24 21:34:51.451158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.974 [2024-04-24 21:34:51.451726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.974 [2024-04-24 21:34:51.451771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.974 [2024-04-24 21:34:51.472913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.974 [2024-04-24 21:34:51.473334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.974 [2024-04-24 21:34:51.473362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.974 [2024-04-24 21:34:51.496078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.974 [2024-04-24 21:34:51.496667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.974 [2024-04-24 21:34:51.496715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.974 [2024-04-24 21:34:51.518285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.974 [2024-04-24 21:34:51.518705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.974 [2024-04-24 21:34:51.518734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.974 [2024-04-24 21:34:51.540405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.974 [2024-04-24 21:34:51.540957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.974 [2024-04-24 21:34:51.541009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.974 [2024-04-24 21:34:51.562799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.975 [2024-04-24 21:34:51.563286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.975 [2024-04-24 21:34:51.563329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.975 [2024-04-24 21:34:51.585259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.975 [2024-04-24 21:34:51.585912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.975 [2024-04-24 21:34:51.585941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.975 [2024-04-24 21:34:51.606842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.975 [2024-04-24 21:34:51.607285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.975 
[2024-04-24 21:34:51.607313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.975 [2024-04-24 21:34:51.625770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.975 [2024-04-24 21:34:51.626278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.975 [2024-04-24 21:34:51.626324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.975 [2024-04-24 21:34:51.647430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:25.975 [2024-04-24 21:34:51.647967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.975 [2024-04-24 21:34:51.647996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.669912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.670501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.670546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.692369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.692823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.692866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.715891] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.716365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.716410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.734819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.735242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.735270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.756169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.756685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.756727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.778177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.778639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.778667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.800255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.800848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.800891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.821754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.822337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.822364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.841981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.842438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.842466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.860597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.861054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.861081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.881183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.881782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.881810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.233 [2024-04-24 21:34:51.903672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.233 [2024-04-24 21:34:51.904159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.233 [2024-04-24 21:34:51.904204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.491 [2024-04-24 21:34:51.925740] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:51.926405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:51.926433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.492 [2024-04-24 21:34:51.944342] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:51.944744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:51.944774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.492 [2024-04-24 21:34:51.961593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:51.962013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:51.962056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.492 [2024-04-24 21:34:51.981756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:51.982269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:51.982314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.492 [2024-04-24 21:34:52.006999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:52.007442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:52.007486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.492 [2024-04-24 21:34:52.028221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:52.028720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:52.028762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.492 [2024-04-24 21:34:52.050021] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:52.050595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:52.050650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.492 [2024-04-24 21:34:52.072369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:52.073064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:52.073091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.492 [2024-04-24 21:34:52.094466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:52.094880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:52.094929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.492 [2024-04-24 21:34:52.115895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:52.116432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:52.116477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.492 [2024-04-24 21:34:52.137147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:52.137546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:52.137575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.492 [2024-04-24 21:34:52.159861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.492 [2024-04-24 21:34:52.160370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.492 [2024-04-24 21:34:52.160415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.750 [2024-04-24 21:34:52.180597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.750 [2024-04-24 21:34:52.181048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.750 [2024-04-24 21:34:52.181090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.750 [2024-04-24 21:34:52.199875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.750 
[2024-04-24 21:34:52.200279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.750 [2024-04-24 21:34:52.200323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.750 [2024-04-24 21:34:52.220594] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.750 [2024-04-24 21:34:52.221192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.750 [2024-04-24 21:34:52.221237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.750 [2024-04-24 21:34:52.241279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.750 [2024-04-24 21:34:52.241798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.750 [2024-04-24 21:34:52.241840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.750 [2024-04-24 21:34:52.260225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.750 [2024-04-24 21:34:52.260687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.750 [2024-04-24 21:34:52.260730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.750 [2024-04-24 21:34:52.280716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.750 [2024-04-24 21:34:52.281080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.750 [2024-04-24 21:34:52.281107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.750 [2024-04-24 21:34:52.302542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.750 [2024-04-24 21:34:52.302994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.750 [2024-04-24 21:34:52.303022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.751 [2024-04-24 21:34:52.325761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.751 [2024-04-24 21:34:52.326247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.751 [2024-04-24 21:34:52.326273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.751 [2024-04-24 21:34:52.348025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.751 [2024-04-24 21:34:52.348536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.751 [2024-04-24 21:34:52.348562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.751 [2024-04-24 21:34:52.371413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.751 [2024-04-24 21:34:52.371957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.751 [2024-04-24 21:34:52.372003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.751 [2024-04-24 21:34:52.394412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.751 [2024-04-24 21:34:52.394953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.751 [2024-04-24 21:34:52.395000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.751 [2024-04-24 21:34:52.417846] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:26.751 [2024-04-24 21:34:52.418359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.751 [2024-04-24 21:34:52.418402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:27.009 [2024-04-24 21:34:52.439901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.440487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.440515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:27.009 [2024-04-24 21:34:52.460132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.460698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.460727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:27.009 [2024-04-24 21:34:52.481708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.482114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.482156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:27.009 [2024-04-24 21:34:52.502889] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.503391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.503437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:27.009 [2024-04-24 21:34:52.526089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.526714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.526741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:27.009 [2024-04-24 21:34:52.548766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.549279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.549324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:27.009 [2024-04-24 21:34:52.571403] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.571868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.571897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:27.009 [2024-04-24 21:34:52.593693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.594099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.594140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:27.009 [2024-04-24 21:34:52.616350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.616829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.616872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:27.009 [2024-04-24 21:34:52.639299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.639828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.639870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:27.009 [2024-04-24 21:34:52.662301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.662821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.662859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:27.009 [2024-04-24 21:34:52.685313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.009 [2024-04-24 21:34:52.685930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.009 [2024-04-24 21:34:52.685959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:27.268 [2024-04-24 21:34:52.706886] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.268 [2024-04-24 21:34:52.707426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.268 [2024-04-24 21:34:52.707470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:27.268 [2024-04-24 21:34:52.726206] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.268 [2024-04-24 21:34:52.726720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.268 [2024-04-24 21:34:52.726749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:27.268 [2024-04-24 21:34:52.751035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.268 [2024-04-24 21:34:52.751588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.268 [2024-04-24 21:34:52.751639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:27.268 [2024-04-24 21:34:52.774040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.268 [2024-04-24 21:34:52.774666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.268 [2024-04-24 21:34:52.774710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:27.268 [2024-04-24 21:34:52.794814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.268 [2024-04-24 21:34:52.795275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.268 [2024-04-24 21:34:52.795303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:27.268 [2024-04-24 21:34:52.815189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.268 [2024-04-24 21:34:52.815692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.268 [2024-04-24 21:34:52.815721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:27.268 [2024-04-24 21:34:52.836779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.268 [2024-04-24 21:34:52.837290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.268 [2024-04-24 21:34:52.837334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:27.268 [2024-04-24 21:34:52.858015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.268 [2024-04-24 21:34:52.858475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.268 [2024-04-24 21:34:52.858504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:27.268 [2024-04-24 21:34:52.880657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.268 [2024-04-24 21:34:52.881122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.268 [2024-04-24 21:34:52.881149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:27.268 [2024-04-24 21:34:52.902048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.268 [2024-04-24 21:34:52.902643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.268 [2024-04-24 21:34:52.902672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:27.268 [2024-04-24 21:34:52.923640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.268 [2024-04-24 21:34:52.924265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.268 [2024-04-24 21:34:52.924293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:27.526 [2024-04-24 21:34:52.947775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90 00:20:27.526 [2024-04-24 21:34:52.948475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.526 [2024-04-24 21:34:52.948518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:27.526 [2024-04-24 21:34:52.969073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90
00:20:27.526 [2024-04-24 21:34:52.969439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.526 [2024-04-24 21:34:52.969481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:27.526 [2024-04-24 21:34:52.989899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1732be0) with pdu=0x2000190fef90
00:20:27.526 [2024-04-24 21:34:52.990558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.526 [2024-04-24 21:34:52.990586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:27.526
00:20:27.526 Latency(us)
00:20:27.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:27.526 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:27.526 nvme0n1 : 2.01 1435.02 179.38 0.00 0.00 11120.31 7912.87 24175.50
00:20:27.526 ===================================================================================================================
00:20:27.526 Total : 1435.02 179.38 0.00 0.00 11120.31 7912.87 24175.50
00:20:27.526 0
00:20:27.526 21:34:53 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:27.526 21:34:53 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:27.526 21:34:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:27.526 21:34:53 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:27.526 | .driver_specific
00:20:27.526 | .nvme_error
00:20:27.526 | .status_code
00:20:27.526 | .command_transient_transport_error'
00:20:27.784 21:34:53 -- host/digest.sh@71 -- # (( 92 > 0 ))
00:20:27.784 21:34:53 -- host/digest.sh@73 -- # killprocess 2674655
00:20:27.784 21:34:53 -- common/autotest_common.sh@936 -- # '[' -z 2674655 ']'
00:20:27.784 21:34:53 -- common/autotest_common.sh@940 -- # kill -0 2674655
00:20:27.784 21:34:53 -- common/autotest_common.sh@941 -- # uname
00:20:27.784 21:34:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:27.784 21:34:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2674655
00:20:27.784 21:34:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:27.784 21:34:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:27.784 21:34:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2674655'
00:20:27.784 killing process with pid 2674655
00:20:27.784 21:34:53 -- common/autotest_common.sh@955 -- # kill 2674655
00:20:27.784 Received shutdown signal, test time was about 2.000000 seconds
00:20:27.784
00:20:27.784 Latency(us)
00:20:27.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:27.784 ===================================================================================================================
00:20:27.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:27.784 21:34:53 -- common/autotest_common.sh@960 -- # wait 2674655
00:20:28.042 21:34:53 --
host/digest.sh@116 -- # killprocess 2673271 00:20:28.042 21:34:53 -- common/autotest_common.sh@936 -- # '[' -z 2673271 ']' 00:20:28.042 21:34:53 -- common/autotest_common.sh@940 -- # kill -0 2673271 00:20:28.042 21:34:53 -- common/autotest_common.sh@941 -- # uname 00:20:28.042 21:34:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:28.042 21:34:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2673271 00:20:28.042 21:34:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:28.042 21:34:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:28.042 21:34:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2673271' 00:20:28.042 killing process with pid 2673271 00:20:28.042 21:34:53 -- common/autotest_common.sh@955 -- # kill 2673271 00:20:28.042 21:34:53 -- common/autotest_common.sh@960 -- # wait 2673271 00:20:28.301 00:20:28.301 real 0m16.122s 00:20:28.301 user 0m31.831s 00:20:28.301 sys 0m3.875s 00:20:28.301 21:34:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:28.301 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:20:28.301 ************************************ 00:20:28.301 END TEST nvmf_digest_error 00:20:28.301 ************************************ 00:20:28.301 21:34:53 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:28.301 21:34:53 -- host/digest.sh@150 -- # nvmftestfini 00:20:28.301 21:34:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:28.301 21:34:53 -- nvmf/common.sh@117 -- # sync 00:20:28.301 21:34:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:28.301 21:34:53 -- nvmf/common.sh@120 -- # set +e 00:20:28.301 21:34:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:28.301 21:34:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:28.301 rmmod nvme_tcp 00:20:28.301 rmmod nvme_fabrics 00:20:28.301 rmmod nvme_keyring 00:20:28.301 21:34:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:28.301 21:34:53 -- nvmf/common.sh@124 -- # set -e 00:20:28.301 21:34:53 -- nvmf/common.sh@125 -- # return 0 00:20:28.301 21:34:53 -- nvmf/common.sh@478 -- # '[' -n 2673271 ']' 00:20:28.301 21:34:53 -- nvmf/common.sh@479 -- # killprocess 2673271 00:20:28.301 21:34:53 -- common/autotest_common.sh@936 -- # '[' -z 2673271 ']' 00:20:28.301 21:34:53 -- common/autotest_common.sh@940 -- # kill -0 2673271 00:20:28.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2673271) - No such process 00:20:28.301 21:34:53 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2673271 is not found' 00:20:28.301 Process with pid 2673271 is not found 00:20:28.301 21:34:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:28.301 21:34:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:28.301 21:34:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:28.301 21:34:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.301 21:34:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:28.301 21:34:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.301 21:34:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.301 21:34:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.838 21:34:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:30.838 00:20:30.838 real 0m37.465s 00:20:30.838 user 1m6.827s 00:20:30.838 sys 0m9.318s 00:20:30.838 21:34:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:30.838 21:34:55 -- 
common/autotest_common.sh@10 -- # set +x 00:20:30.838 ************************************ 00:20:30.838 END TEST nvmf_digest 00:20:30.838 ************************************ 00:20:30.838 21:34:56 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:20:30.838 21:34:56 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:20:30.838 21:34:56 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:20:30.838 21:34:56 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:20:30.838 21:34:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:30.838 21:34:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:30.838 21:34:56 -- common/autotest_common.sh@10 -- # set +x 00:20:30.838 ************************************ 00:20:30.838 START TEST nvmf_bdevperf 00:20:30.838 ************************************ 00:20:30.838 21:34:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:20:30.838 * Looking for test storage... 00:20:30.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:30.838 21:34:56 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.838 21:34:56 -- nvmf/common.sh@7 -- # uname -s 00:20:30.838 21:34:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.838 21:34:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.838 21:34:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.838 21:34:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.838 21:34:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.838 21:34:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.838 21:34:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.838 21:34:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.838 21:34:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.838 21:34:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.838 21:34:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.838 21:34:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.838 21:34:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.838 21:34:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.838 21:34:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.838 21:34:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.838 21:34:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.838 21:34:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.838 21:34:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.838 21:34:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.839 21:34:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.839 21:34:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.839 21:34:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.839 21:34:56 -- paths/export.sh@5 -- # export PATH 00:20:30.839 21:34:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.839 21:34:56 -- nvmf/common.sh@47 -- # : 0 00:20:30.839 21:34:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.839 21:34:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.839 21:34:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.839 21:34:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.839 21:34:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.839 21:34:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.839 21:34:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:30.839 21:34:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.839 21:34:56 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:30.839 21:34:56 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:30.839 21:34:56 -- host/bdevperf.sh@24 -- # nvmftestinit 00:20:30.839 21:34:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:30.839 21:34:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.839 21:34:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:30.839 21:34:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:30.839 21:34:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:30.839 21:34:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:20:30.839 21:34:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.839 21:34:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.839 21:34:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:30.839 21:34:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:30.839 21:34:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:30.839 21:34:56 -- common/autotest_common.sh@10 -- # set +x 00:20:32.739 21:34:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:32.739 21:34:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:32.739 21:34:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:32.739 21:34:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:32.739 21:34:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:32.739 21:34:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:32.739 21:34:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:32.739 21:34:58 -- nvmf/common.sh@295 -- # net_devs=() 00:20:32.739 21:34:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:32.739 21:34:58 -- nvmf/common.sh@296 -- # e810=() 00:20:32.739 21:34:58 -- nvmf/common.sh@296 -- # local -ga e810 00:20:32.739 21:34:58 -- nvmf/common.sh@297 -- # x722=() 00:20:32.739 21:34:58 -- nvmf/common.sh@297 -- # local -ga x722 00:20:32.739 21:34:58 -- nvmf/common.sh@298 -- # mlx=() 00:20:32.739 21:34:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:32.739 21:34:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.739 21:34:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.739 21:34:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.739 21:34:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.739 21:34:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.739 21:34:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.739 21:34:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.739 21:34:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.739 21:34:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.739 21:34:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.739 21:34:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.739 21:34:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:32.739 21:34:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:32.739 21:34:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:32.739 21:34:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.739 21:34:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:32.739 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:32.739 21:34:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.739 21:34:58 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:32.739 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:32.739 21:34:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:32.739 21:34:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.739 21:34:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.739 21:34:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:32.739 21:34:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.739 21:34:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:32.739 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:32.739 21:34:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.739 21:34:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.739 21:34:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.739 21:34:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:32.739 21:34:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.739 21:34:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:32.739 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:32.739 21:34:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.739 21:34:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:32.739 21:34:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:32.739 21:34:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:32.739 21:34:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.739 21:34:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.739 21:34:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.739 21:34:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:32.739 21:34:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.739 21:34:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.739 21:34:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:32.739 21:34:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.739 21:34:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.739 21:34:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:32.739 21:34:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:32.739 21:34:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.739 21:34:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.739 21:34:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.739 21:34:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.739 21:34:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:32.739 21:34:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
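The "Found net devices under ..." lines above come from nvmf/common.sh walking sysfs to map each Intel E810 PCI function to its kernel netdev before the namespace plumbing continues below. A minimal standalone sketch of that lookup, assuming only the 0000:0a:00.x addresses seen in this run (the script and variable names are illustrative, not the harness's exact code):

#!/usr/bin/env bash
# Sketch: resolve NIC PCI functions to kernel netdev names via sysfs,
# mirroring the net_devs[] discovery traced above. Illustrative only.
shopt -s nullglob
pci_devs=("0000:0a:00.0" "0000:0a:00.1")   # E810 functions from this log
for pci in "${pci_devs[@]}"; do
    # every network-class PCI function exposes its interface(s) under .../net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    if (( ${#pci_net_devs[@]} == 0 )); then
        echo "no netdev bound to $pci (driver not loaded?)" >&2
        continue
    fi
    echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
done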
00:20:32.739 21:34:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.739 21:34:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.739 21:34:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:32.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:32.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:20:32.739 00:20:32.739 --- 10.0.0.2 ping statistics --- 00:20:32.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.739 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:20:32.739 21:34:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:32.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:20:32.739 00:20:32.739 --- 10.0.0.1 ping statistics --- 00:20:32.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.739 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:20:32.739 21:34:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.739 21:34:58 -- nvmf/common.sh@411 -- # return 0 00:20:32.739 21:34:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:32.739 21:34:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.739 21:34:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:32.739 21:34:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.739 21:34:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:32.739 21:34:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:32.739 21:34:58 -- host/bdevperf.sh@25 -- # tgt_init 00:20:32.739 21:34:58 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:32.739 21:34:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:32.739 21:34:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:32.739 21:34:58 -- common/autotest_common.sh@10 -- # set +x 00:20:32.739 21:34:58 -- nvmf/common.sh@470 -- # nvmfpid=2677129 00:20:32.740 21:34:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:32.740 21:34:58 -- nvmf/common.sh@471 -- # waitforlisten 2677129 00:20:32.740 21:34:58 -- common/autotest_common.sh@817 -- # '[' -z 2677129 ']' 00:20:32.740 21:34:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.740 21:34:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:32.740 21:34:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.740 21:34:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:32.740 21:34:58 -- common/autotest_common.sh@10 -- # set +x 00:20:32.740 [2024-04-24 21:34:58.413143] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
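The trace above is the tail of the TCP network init plus nvmfappstart: the target NIC sits in its own namespace with 10.0.0.2, the initiator side keeps 10.0.0.1, the firewall is opened on port 4420, reachability is ping-verified, and nvmf_tgt is launched inside the namespace while waitforlisten polls its RPC socket (the startup banner above and the EAL lines that follow are its output). A condensed re-creation using the same names, addresses, and paths; the polling loop is a simplification of waitforlisten (the real helper caps retries at max_retries=100), and rpc_get_methods is used here only as a cheap liveness probe:

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target NIC goes into its own namespace; initiator NIC stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Launch the target inside the namespace, then poll its RPC socket the way
# waitforlisten does (simplified: the real helper bounds the retry count).
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"

The namespace split is what lets a single host act as both NVMe/TCP initiator and target over two real E810 ports on the same box.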
00:20:32.740 [2024-04-24 21:34:58.413212] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.998 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.998 [2024-04-24 21:34:58.483209] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:32.998 [2024-04-24 21:34:58.584614] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.998 [2024-04-24 21:34:58.584674] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.998 [2024-04-24 21:34:58.584689] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.998 [2024-04-24 21:34:58.584700] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.998 [2024-04-24 21:34:58.584710] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.998 [2024-04-24 21:34:58.585020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.998 [2024-04-24 21:34:58.585044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:32.998 [2024-04-24 21:34:58.585048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.257 21:34:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:33.257 21:34:58 -- common/autotest_common.sh@850 -- # return 0 00:20:33.257 21:34:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:33.257 21:34:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:33.257 21:34:58 -- common/autotest_common.sh@10 -- # set +x 00:20:33.257 21:34:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.257 21:34:58 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:33.257 21:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.257 21:34:58 -- common/autotest_common.sh@10 -- # set +x 00:20:33.257 [2024-04-24 21:34:58.712748] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.257 21:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.257 21:34:58 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:33.257 21:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.257 21:34:58 -- common/autotest_common.sh@10 -- # set +x 00:20:33.257 Malloc0 00:20:33.257 21:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.257 21:34:58 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:33.257 21:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.257 21:34:58 -- common/autotest_common.sh@10 -- # set +x 00:20:33.257 21:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.257 21:34:58 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:33.257 21:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.257 21:34:58 -- common/autotest_common.sh@10 -- # set +x 00:20:33.257 21:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.257 21:34:58 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.257 21:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.257 
21:34:58 -- common/autotest_common.sh@10 -- # set +x 00:20:33.257 [2024-04-24 21:34:58.777288] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.257 21:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.257 21:34:58 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:20:33.257 21:34:58 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:20:33.257 21:34:58 -- nvmf/common.sh@521 -- # config=() 00:20:33.257 21:34:58 -- nvmf/common.sh@521 -- # local subsystem config 00:20:33.257 21:34:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:33.257 21:34:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:33.257 { 00:20:33.257 "params": { 00:20:33.257 "name": "Nvme$subsystem", 00:20:33.257 "trtype": "$TEST_TRANSPORT", 00:20:33.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.257 "adrfam": "ipv4", 00:20:33.257 "trsvcid": "$NVMF_PORT", 00:20:33.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.257 "hdgst": ${hdgst:-false}, 00:20:33.257 "ddgst": ${ddgst:-false} 00:20:33.257 }, 00:20:33.257 "method": "bdev_nvme_attach_controller" 00:20:33.257 } 00:20:33.257 EOF 00:20:33.257 )") 00:20:33.257 21:34:58 -- nvmf/common.sh@543 -- # cat 00:20:33.257 21:34:58 -- nvmf/common.sh@545 -- # jq . 00:20:33.257 21:34:58 -- nvmf/common.sh@546 -- # IFS=, 00:20:33.257 21:34:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:33.257 "params": { 00:20:33.257 "name": "Nvme1", 00:20:33.257 "trtype": "tcp", 00:20:33.257 "traddr": "10.0.0.2", 00:20:33.257 "adrfam": "ipv4", 00:20:33.257 "trsvcid": "4420", 00:20:33.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.257 "hdgst": false, 00:20:33.257 "ddgst": false 00:20:33.257 }, 00:20:33.257 "method": "bdev_nvme_attach_controller" 00:20:33.257 }' 00:20:33.257 [2024-04-24 21:34:58.821756] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:20:33.257 [2024-04-24 21:34:58.821834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677158 ] 00:20:33.257 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.257 [2024-04-24 21:34:58.881358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.516 [2024-04-24 21:34:58.992830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.516 Running I/O for 1 seconds... 
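The gen_nvmf_target_json output printed above is the per-controller object that, wrapped in the subsystems/bdev envelope bdevperf's JSON loader expects, arrives on /dev/fd/63. A hedged way to reproduce the same one-second verify run outside the harness; the envelope is reconstructed here (the trace only prints the inner method object) and the config file path is illustrative:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Same bdev_nvme_attach_controller parameters as printed in the trace above.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 128-deep, 4 KiB, verify workload for 1 second, matching the run above.
"$SPDK/build/examples/bdevperf" --json /tmp/bdevperf_nvme.json \
    -q 128 -o 4096 -w verify -t 1

Flipping hdgst/ddgst to true is how the digest runs earlier in this log make the host compute and check the per-PDU CRC32C digests.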
00:20:34.899
00:20:34.899 Latency(us)
00:20:34.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:34.899 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:34.899 Verification LBA range: start 0x0 length 0x4000
00:20:34.899 Nvme1n1 : 1.01 8636.60 33.74 0.00 0.00 14761.07 983.04 16796.63
00:20:34.899 ===================================================================================================================
00:20:34.899 Total : 8636.60 33.74 0.00 0.00 14761.07 983.04 16796.63
00:20:34.899 21:35:00 -- host/bdevperf.sh@30 -- # bdevperfpid=2677416
00:20:34.899 21:35:00 -- host/bdevperf.sh@32 -- # sleep 3
00:20:34.899 21:35:00 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:20:34.899 21:35:00 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:20:34.899 21:35:00 -- nvmf/common.sh@521 -- # config=()
00:20:34.899 21:35:00 -- nvmf/common.sh@521 -- # local subsystem config
00:20:34.899 21:35:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:20:34.899 21:35:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:20:34.899 {
00:20:34.899 "params": {
00:20:34.899 "name": "Nvme$subsystem",
00:20:34.899 "trtype": "$TEST_TRANSPORT",
00:20:34.899 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:34.899 "adrfam": "ipv4",
00:20:34.899 "trsvcid": "$NVMF_PORT",
00:20:34.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:34.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:34.899 "hdgst": ${hdgst:-false},
00:20:34.899 "ddgst": ${ddgst:-false}
00:20:34.899 },
00:20:34.899 "method": "bdev_nvme_attach_controller"
00:20:34.899 }
00:20:34.899 EOF
00:20:34.899 )")
00:20:34.899 21:35:00 -- nvmf/common.sh@543 -- # cat
00:20:34.899 21:35:00 -- nvmf/common.sh@545 -- # jq .
00:20:34.899 21:35:00 -- nvmf/common.sh@546 -- # IFS=,
00:20:34.899 21:35:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:20:34.899 "params": {
00:20:34.899 "name": "Nvme1",
00:20:34.899 "trtype": "tcp",
00:20:34.899 "traddr": "10.0.0.2",
00:20:34.899 "adrfam": "ipv4",
00:20:34.899 "trsvcid": "4420",
00:20:34.899 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:34.899 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:34.899 "hdgst": false,
00:20:34.899 "ddgst": false
00:20:34.899 },
00:20:34.899 "method": "bdev_nvme_attach_controller"
00:20:34.899 }'
00:20:34.899 [2024-04-24 21:35:00.477734] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:20:34.899 [2024-04-24 21:35:00.477814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677416 ]
00:20:34.899 EAL: No free 2048 kB hugepages reported on node 1
00:20:35.157 [2024-04-24 21:35:00.540082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:35.157 [2024-04-24 21:35:00.647596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:35.414 Running I/O for 15 seconds...
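The 15-second verify run just started is the failover leg: the next trace lines hard-kill the target (kill -9 2677129) while bdevperf keeps issuing I/O, which is what produces the long run of ABORTED - SQ DELETION completions below. A reduced sketch of that choreography; the restart step is an assumption here (the harness drives it through tgt_init), and $nvmfpid carries over from the launch sketch earlier:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Long verify run in the background; -f, as in the trace above, keeps
# bdevperf running through I/O failures so the reconnect path is exercised.
"$SPDK/build/examples/bdevperf" --json /tmp/bdevperf_nvme.json \
    -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!

sleep 3                # let I/O reach steady state
kill -9 "$nvmfpid"     # hard-kill the target mid-run ($nvmfpid: launch sketch)
sleep 3                # the host now retries against a dead listener

# Re-provision a fresh target here (transport, Malloc0, cnode1, listener),
# e.g. via the rpc_cmd sequence traced earlier; assumed helper, not shown:
# tgt_init

wait "$bdevperfpid"    # bdevperf must ride out the outage and exit 0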
00:20:37.944 21:35:03 -- host/bdevperf.sh@33 -- # kill -9 2677129 00:20:37.944 21:35:03 -- host/bdevperf.sh@35 -- # sleep 3 00:20:37.944 [2024-04-24 21:35:03.451684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.944 [2024-04-24 21:35:03.451739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.944 [2024-04-24 21:35:03.451771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.944 [2024-04-24 21:35:03.451791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.944 [2024-04-24 21:35:03.451811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.944 [2024-04-24 21:35:03.451829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.944 [2024-04-24 21:35:03.451849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.944 [2024-04-24 21:35:03.451867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.944 [2024-04-24 21:35:03.451885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.944 [2024-04-24 21:35:03.451911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.944 [2024-04-24 21:35:03.451930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.451949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.451969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.451988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.945 [2024-04-24 21:35:03.452374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452441] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.452984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.452999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.453016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.453032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.453048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.453064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.453081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.453096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.453112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.453128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.453144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.453160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.453180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.453196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.945 [2024-04-24 21:35:03.453213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.945 [2024-04-24 21:35:03.453229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:37.946 [2024-04-24 21:35:03.453438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 
21:35:03.453775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.453981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.453999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454419] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.946 [2024-04-24 21:35:03.454501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.946 [2024-04-24 21:35:03.454518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.454974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.454989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127256 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:37.947 [2024-04-24 21:35:03.455405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 21:35:03.455468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.947 [2024-04-24 21:35:03.455500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.947 [2024-04-24 21:35:03.455538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.947 [2024-04-24 21:35:03.455571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.947 [2024-04-24 21:35:03.455602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.947 [2024-04-24 21:35:03.455647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.947 [2024-04-24 21:35:03.455681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.947 [2024-04-24 21:35:03.455713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.947 [2024-04-24 
21:35:03.455745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.947 [2024-04-24 21:35:03.455761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.948 [2024-04-24 21:35:03.455777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.948 [2024-04-24 21:35:03.455793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.948 [2024-04-24 21:35:03.455808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.948 [2024-04-24 21:35:03.455825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.948 [2024-04-24 21:35:03.455840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.948 [2024-04-24 21:35:03.455856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.948 [2024-04-24 21:35:03.455872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.948 [2024-04-24 21:35:03.455888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.948 [2024-04-24 21:35:03.455903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.948 [2024-04-24 21:35:03.455920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.948 [2024-04-24 21:35:03.455934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.948 [2024-04-24 21:35:03.455950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff5a0 is same with the state(5) to be set 00:20:37.948 [2024-04-24 21:35:03.455968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:37.948 [2024-04-24 21:35:03.455981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:37.948 [2024-04-24 21:35:03.455993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127416 len:8 PRP1 0x0 PRP2 0x0 00:20:37.948 [2024-04-24 21:35:03.456008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.948 [2024-04-24 21:35:03.456072] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dff5a0 was disconnected and freed. reset controller. 
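Every queued command above is completed manually with status (00/08): status code type 0x0 (generic) and status code 0x08, Command Aborted due to SQ Deletion, printed while nvme_qpair_abort_queued_reqs drains qpair 0x1dff5a0 before it is freed for the reset. A minimal sketch, assuming SPDK's public headers, of how an I/O completion callback can recognize this transient status; resubmit_io(), handle_io_error(), and io_done() are hypothetical stubs standing in for application logic.

```c
/* Sketch: classify the (00/08) completions above in an I/O callback.
 * Assumes SPDK public headers; the three helpers are hypothetical. */
#include <stdio.h>

#include "spdk/nvme.h"

static void resubmit_io(void *ctx) { printf("requeue %p\n", ctx); }
static void handle_io_error(void *ctx, const struct spdk_nvme_cpl *cpl) { (void)cpl; printf("fail %p\n", ctx); }
static void io_done(void *ctx) { printf("done %p\n", ctx); }

/* Matches spdk_nvme_cmd_cb, as passed to e.g. spdk_nvme_ns_cmd_read(). */
static void
io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		io_done(ctx);
		return;
	}
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The (00/08) case above: the submission queue was deleted
		 * while the command was queued (controller reset in flight).
		 * The command never reached the media, so it is safe to
		 * resubmit once the controller reconnects. */
		resubmit_io(ctx);
		return;
	}
	handle_io_error(ctx, cpl);	/* any other error is surfaced */
}
```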
00:20:37.948 [2024-04-24 21:35:03.459857] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.948 [2024-04-24 21:35:03.459935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.948 [2024-04-24 21:35:03.460697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.460897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.460940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.948 [2024-04-24 21:35:03.460958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.948 [2024-04-24 21:35:03.461196] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.948 [2024-04-24 21:35:03.461436] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.948 [2024-04-24 21:35:03.461460] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.948 [2024-04-24 21:35:03.461478] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.948 [2024-04-24 21:35:03.465022] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.948 [2024-04-24 21:35:03.473990] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.948 [2024-04-24 21:35:03.474463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.474707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.474737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.948 [2024-04-24 21:35:03.474756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.948 [2024-04-24 21:35:03.474992] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.948 [2024-04-24 21:35:03.475235] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.948 [2024-04-24 21:35:03.475261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.948 [2024-04-24 21:35:03.475276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.948 [2024-04-24 21:35:03.478814] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
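Each retry cycle opens with posix_sock_create reporting connect() failed, errno = 111. On Linux, errno 111 is ECONNREFUSED: the kernel received a RST because nothing is listening at 10.0.0.2:4420 anymore. A standalone sketch that reproduces the same errno with a plain POSIX connect(); the loopback address is illustrative, not taken from the test.

```c
/* Sketch: a TCP connect() to a port with no listener fails with
 * errno 111 (ECONNREFUSED) on Linux, matching the posix.c lines above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in sa = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),	/* NVMe/TCP well-known port */
	};

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
		/* Expect errno = 111 (Connection refused) when no
		 * listener exists on the port. */
		printf("connect() failed, errno = %d: %s\n", errno, strerror(errno));
	}
	close(fd);
	return 0;
}
```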
00:20:37.948 [2024-04-24 21:35:03.487995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.948 [2024-04-24 21:35:03.488480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.488786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.488817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.948 [2024-04-24 21:35:03.488836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.948 [2024-04-24 21:35:03.489080] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.948 [2024-04-24 21:35:03.489321] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.948 [2024-04-24 21:35:03.489346] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.948 [2024-04-24 21:35:03.489362] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.948 [2024-04-24 21:35:03.492894] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.948 [2024-04-24 21:35:03.501842] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.948 [2024-04-24 21:35:03.502462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.502735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.502765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.948 [2024-04-24 21:35:03.502783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.948 [2024-04-24 21:35:03.503019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.948 [2024-04-24 21:35:03.503259] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.948 [2024-04-24 21:35:03.503284] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.948 [2024-04-24 21:35:03.503299] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.948 [2024-04-24 21:35:03.506836] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
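Interleaved with the refused connects, nvme_tcp_qpair_process_completions logs "Failed to flush tqpair=0x1bce160 (9)". The 9 is errno EBADF: the qpair's socket had already been closed by the time the flush ran. A standalone sketch reproducing that errno, independent of SPDK:

```c
/* Sketch: "(9): Bad file descriptor" above is errno 9 (EBADF), which
 * any I/O on an already-closed socket reproduces. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	close(fd);	/* the descriptor is gone from here on */

	/* Writing (flushing) after close is what the nvme_tcp
	 * completion-processing path trips over in the log. */
	if (write(fd, "x", 1) < 0) {
		printf("write failed, errno = %d: %s\n", errno, strerror(errno));
	}
	return 0;
}
```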
00:20:37.948 [2024-04-24 21:35:03.515800] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.948 [2024-04-24 21:35:03.516399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.516648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.516676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.948 [2024-04-24 21:35:03.516694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.948 [2024-04-24 21:35:03.516931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.948 [2024-04-24 21:35:03.517172] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.948 [2024-04-24 21:35:03.517197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.948 [2024-04-24 21:35:03.517212] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.948 [2024-04-24 21:35:03.520750] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.948 [2024-04-24 21:35:03.529714] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.948 [2024-04-24 21:35:03.530258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.530469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.530497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.948 [2024-04-24 21:35:03.530515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.948 [2024-04-24 21:35:03.530759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.948 [2024-04-24 21:35:03.531000] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.948 [2024-04-24 21:35:03.531024] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.948 [2024-04-24 21:35:03.531040] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.948 [2024-04-24 21:35:03.534567] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
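The cycle repeating here is the asynchronous reset flow named in the log itself: nvme_ctrlr_disconnect, a fresh TCP socket via nvme_tcp_qpair_connect_sock, then spdk_nvme_ctrlr_reconnect_poll_async until a terminal result. A sketch of driving that flow, assuming a recent SPDK in which these controller functions are public API; a real application would call the poll function from its event loop rather than spinning as here.

```c
/* A sketch of the disconnect -> reconnect -> poll sequence, assuming a
 * recent SPDK where these controller functions are public API. */
#include <errno.h>

#include "spdk/nvme.h"

static int
try_reset(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc;

	rc = spdk_nvme_ctrlr_disconnect(ctrlr);	/* "resetting controller" */
	if (rc != 0) {
		return rc;	/* e.g. a reset is already in progress */
	}

	spdk_nvme_ctrlr_reconnect_async(ctrlr);	/* opens a new TCP socket */

	/* -EAGAIN means the reconnect is still in progress. A refused
	 * connection ends with a nonzero result, which the log reports as
	 * "controller reinitialization failed" / "in failed state". */
	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);

	return rc;	/* 0 once the controller is back online */
}
```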
00:20:37.948 [2024-04-24 21:35:03.543539] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.948 [2024-04-24 21:35:03.544032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.544256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.948 [2024-04-24 21:35:03.544293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.948 [2024-04-24 21:35:03.544313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.948 [2024-04-24 21:35:03.544549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.948 [2024-04-24 21:35:03.544802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.948 [2024-04-24 21:35:03.544827] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.948 [2024-04-24 21:35:03.544843] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.948 [2024-04-24 21:35:03.548374] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.948 [2024-04-24 21:35:03.557333] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.948 [2024-04-24 21:35:03.557767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.949 [2024-04-24 21:35:03.557994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.949 [2024-04-24 21:35:03.558044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.949 [2024-04-24 21:35:03.558063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.949 [2024-04-24 21:35:03.558301] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.949 [2024-04-24 21:35:03.558543] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.949 [2024-04-24 21:35:03.558568] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.949 [2024-04-24 21:35:03.558583] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.949 [2024-04-24 21:35:03.562123] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.949 [2024-04-24 21:35:03.571291] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.949 [2024-04-24 21:35:03.571758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.949 [2024-04-24 21:35:03.571947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.949 [2024-04-24 21:35:03.571976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.949 [2024-04-24 21:35:03.571995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.949 [2024-04-24 21:35:03.572231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.949 [2024-04-24 21:35:03.572474] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.949 [2024-04-24 21:35:03.572499] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.949 [2024-04-24 21:35:03.572515] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.949 [2024-04-24 21:35:03.576053] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.949 [2024-04-24 21:35:03.585222] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.949 [2024-04-24 21:35:03.585675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.949 [2024-04-24 21:35:03.585863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.949 [2024-04-24 21:35:03.585899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.949 [2024-04-24 21:35:03.585922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.949 [2024-04-24 21:35:03.586159] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.949 [2024-04-24 21:35:03.586401] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.949 [2024-04-24 21:35:03.586425] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.949 [2024-04-24 21:35:03.586440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.949 [2024-04-24 21:35:03.589987] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.949 [2024-04-24 21:35:03.599157] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.949 [2024-04-24 21:35:03.599642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.949 [2024-04-24 21:35:03.599856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.949 [2024-04-24 21:35:03.599885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.949 [2024-04-24 21:35:03.599903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.949 [2024-04-24 21:35:03.600138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.949 [2024-04-24 21:35:03.600380] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.949 [2024-04-24 21:35:03.600404] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.949 [2024-04-24 21:35:03.600420] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.949 [2024-04-24 21:35:03.603957] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.949 [2024-04-24 21:35:03.613129] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.949 [2024-04-24 21:35:03.613594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.949 [2024-04-24 21:35:03.613815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.949 [2024-04-24 21:35:03.613845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:37.949 [2024-04-24 21:35:03.613863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:37.949 [2024-04-24 21:35:03.614098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:37.949 [2024-04-24 21:35:03.614340] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.949 [2024-04-24 21:35:03.614365] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.949 [2024-04-24 21:35:03.614381] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.949 [2024-04-24 21:35:03.617922] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.210 [2024-04-24 21:35:03.627100] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.210 [2024-04-24 21:35:03.627574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.627770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.627800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.210 [2024-04-24 21:35:03.627817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.210 [2024-04-24 21:35:03.628059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.210 [2024-04-24 21:35:03.628300] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.210 [2024-04-24 21:35:03.628325] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.210 [2024-04-24 21:35:03.628341] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.210 [2024-04-24 21:35:03.631882] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.210 [2024-04-24 21:35:03.641111] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.210 [2024-04-24 21:35:03.641564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.641775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.641804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.210 [2024-04-24 21:35:03.641823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.210 [2024-04-24 21:35:03.642059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.210 [2024-04-24 21:35:03.642300] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.210 [2024-04-24 21:35:03.642325] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.210 [2024-04-24 21:35:03.642340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.210 [2024-04-24 21:35:03.645881] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.210 [2024-04-24 21:35:03.655069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.210 [2024-04-24 21:35:03.655539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.655733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.655765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.210 [2024-04-24 21:35:03.655784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.210 [2024-04-24 21:35:03.656023] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.210 [2024-04-24 21:35:03.656265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.210 [2024-04-24 21:35:03.656290] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.210 [2024-04-24 21:35:03.656305] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.210 [2024-04-24 21:35:03.659866] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.210 [2024-04-24 21:35:03.669090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.210 [2024-04-24 21:35:03.669577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.669786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.669816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.210 [2024-04-24 21:35:03.669834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.210 [2024-04-24 21:35:03.670069] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.210 [2024-04-24 21:35:03.670317] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.210 [2024-04-24 21:35:03.670341] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.210 [2024-04-24 21:35:03.670356] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.210 [2024-04-24 21:35:03.673911] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.210 [2024-04-24 21:35:03.683111] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.210 [2024-04-24 21:35:03.683581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.683850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.683878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.210 [2024-04-24 21:35:03.683909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.210 [2024-04-24 21:35:03.684140] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.210 [2024-04-24 21:35:03.684340] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.210 [2024-04-24 21:35:03.684361] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.210 [2024-04-24 21:35:03.684374] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.210 [2024-04-24 21:35:03.687443] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.210 [2024-04-24 21:35:03.696352] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.210 [2024-04-24 21:35:03.696831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.697028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.697054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.210 [2024-04-24 21:35:03.697070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.210 [2024-04-24 21:35:03.697323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.210 [2024-04-24 21:35:03.697522] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.210 [2024-04-24 21:35:03.697543] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.210 [2024-04-24 21:35:03.697557] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.210 [2024-04-24 21:35:03.700572] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.210 [2024-04-24 21:35:03.709686] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.210 [2024-04-24 21:35:03.710164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.710378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.710404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.210 [2024-04-24 21:35:03.710420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.210 [2024-04-24 21:35:03.710676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.210 [2024-04-24 21:35:03.710924] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.210 [2024-04-24 21:35:03.710952] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.210 [2024-04-24 21:35:03.710967] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.210 [2024-04-24 21:35:03.714406] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.210 [2024-04-24 21:35:03.723490] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.210 [2024-04-24 21:35:03.723963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.724200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.724227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.210 [2024-04-24 21:35:03.724243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.210 [2024-04-24 21:35:03.724497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.210 [2024-04-24 21:35:03.724766] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.210 [2024-04-24 21:35:03.724789] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.210 [2024-04-24 21:35:03.724803] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.210 [2024-04-24 21:35:03.728199] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.210 [2024-04-24 21:35:03.736761] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.210 [2024-04-24 21:35:03.737247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.737444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.210 [2024-04-24 21:35:03.737472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.211 [2024-04-24 21:35:03.737489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.211 [2024-04-24 21:35:03.737720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.211 [2024-04-24 21:35:03.737946] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.211 [2024-04-24 21:35:03.737968] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.211 [2024-04-24 21:35:03.737981] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.211 [2024-04-24 21:35:03.741087] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.211 [2024-04-24 21:35:03.750060] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.211 [2024-04-24 21:35:03.750478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.750680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.750707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.211 [2024-04-24 21:35:03.750723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.211 [2024-04-24 21:35:03.750961] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.211 [2024-04-24 21:35:03.751171] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.211 [2024-04-24 21:35:03.751192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.211 [2024-04-24 21:35:03.751209] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.211 [2024-04-24 21:35:03.754220] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.211 [2024-04-24 21:35:03.763346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.211 [2024-04-24 21:35:03.763845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.764027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.764052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.211 [2024-04-24 21:35:03.764067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.211 [2024-04-24 21:35:03.764315] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.211 [2024-04-24 21:35:03.764510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.211 [2024-04-24 21:35:03.764530] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.211 [2024-04-24 21:35:03.764543] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.211 [2024-04-24 21:35:03.767584] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.211 [2024-04-24 21:35:03.776505] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.211 [2024-04-24 21:35:03.776985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.777192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.777218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.211 [2024-04-24 21:35:03.777234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.211 [2024-04-24 21:35:03.777475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.211 [2024-04-24 21:35:03.777703] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.211 [2024-04-24 21:35:03.777726] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.211 [2024-04-24 21:35:03.777739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.211 [2024-04-24 21:35:03.780738] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.211 [2024-04-24 21:35:03.789806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.211 [2024-04-24 21:35:03.790245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.790471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.790498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.211 [2024-04-24 21:35:03.790529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.211 [2024-04-24 21:35:03.790776] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.211 [2024-04-24 21:35:03.790996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.211 [2024-04-24 21:35:03.791018] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.211 [2024-04-24 21:35:03.791030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.211 [2024-04-24 21:35:03.793954] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.211 [2024-04-24 21:35:03.803067] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.211 [2024-04-24 21:35:03.803431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.803644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.803673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.211 [2024-04-24 21:35:03.803704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.211 [2024-04-24 21:35:03.803936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.211 [2024-04-24 21:35:03.804129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.211 [2024-04-24 21:35:03.804150] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.211 [2024-04-24 21:35:03.804163] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.211 [2024-04-24 21:35:03.807086] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.211 [2024-04-24 21:35:03.816212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.211 [2024-04-24 21:35:03.816648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.816858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.816885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.211 [2024-04-24 21:35:03.816901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.211 [2024-04-24 21:35:03.817146] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.211 [2024-04-24 21:35:03.817339] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.211 [2024-04-24 21:35:03.817360] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.211 [2024-04-24 21:35:03.817372] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.211 [2024-04-24 21:35:03.820306] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.211 [2024-04-24 21:35:03.829474] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.211 [2024-04-24 21:35:03.829922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.830116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.830144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.211 [2024-04-24 21:35:03.830160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.211 [2024-04-24 21:35:03.830405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.211 [2024-04-24 21:35:03.830599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.211 [2024-04-24 21:35:03.830620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.211 [2024-04-24 21:35:03.830659] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.211 [2024-04-24 21:35:03.833563] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.211 [2024-04-24 21:35:03.842726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.211 [2024-04-24 21:35:03.843190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.843408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.211 [2024-04-24 21:35:03.843435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.211 [2024-04-24 21:35:03.843452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.211 [2024-04-24 21:35:03.843741] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.211 [2024-04-24 21:35:03.843944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.211 [2024-04-24 21:35:03.843980] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.211 [2024-04-24 21:35:03.843993] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.211 [2024-04-24 21:35:03.846922] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.211 [2024-04-24 21:35:03.855900] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.211 [2024-04-24 21:35:03.856341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.212 [2024-04-24 21:35:03.856536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.212 [2024-04-24 21:35:03.856563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.212 [2024-04-24 21:35:03.856580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.212 [2024-04-24 21:35:03.856834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.212 [2024-04-24 21:35:03.857044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.212 [2024-04-24 21:35:03.857065] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.212 [2024-04-24 21:35:03.857078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.212 [2024-04-24 21:35:03.860004] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.212 [2024-04-24 21:35:03.869161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.212 [2024-04-24 21:35:03.869649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.212 [2024-04-24 21:35:03.869841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.212 [2024-04-24 21:35:03.869868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.212 [2024-04-24 21:35:03.869885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.212 [2024-04-24 21:35:03.870134] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.212 [2024-04-24 21:35:03.870327] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.212 [2024-04-24 21:35:03.870347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.212 [2024-04-24 21:35:03.870360] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.212 [2024-04-24 21:35:03.873293] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.212 [2024-04-24 21:35:03.882744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.212 [2024-04-24 21:35:03.883172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.212 [2024-04-24 21:35:03.883391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.212 [2024-04-24 21:35:03.883418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.212 [2024-04-24 21:35:03.883435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.212 [2024-04-24 21:35:03.883704] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.212 [2024-04-24 21:35:03.883933] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.212 [2024-04-24 21:35:03.883954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.212 [2024-04-24 21:35:03.883968] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.471 [2024-04-24 21:35:03.887227] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.471 [2024-04-24 21:35:03.895957] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.471 [2024-04-24 21:35:03.896383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.471 [2024-04-24 21:35:03.896605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.471 [2024-04-24 21:35:03.896642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.471 [2024-04-24 21:35:03.896666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.471 [2024-04-24 21:35:03.896901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.472 [2024-04-24 21:35:03.897110] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.472 [2024-04-24 21:35:03.897131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.472 [2024-04-24 21:35:03.897144] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.472 [2024-04-24 21:35:03.900064] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.472 [2024-04-24 21:35:03.909185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.472 [2024-04-24 21:35:03.909609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.909807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.909833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.472 [2024-04-24 21:35:03.909849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.472 [2024-04-24 21:35:03.910099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.472 [2024-04-24 21:35:03.910292] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.472 [2024-04-24 21:35:03.910313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.472 [2024-04-24 21:35:03.910326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.472 [2024-04-24 21:35:03.913247] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.472 [2024-04-24 21:35:03.922358] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.472 [2024-04-24 21:35:03.922784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.922980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.923010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.472 [2024-04-24 21:35:03.923027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.472 [2024-04-24 21:35:03.923277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.472 [2024-04-24 21:35:03.923470] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.472 [2024-04-24 21:35:03.923491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.472 [2024-04-24 21:35:03.923503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.472 [2024-04-24 21:35:03.926426] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.472 [2024-04-24 21:35:03.935566] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.472 [2024-04-24 21:35:03.936003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.936204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.936242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.472 [2024-04-24 21:35:03.936258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.472 [2024-04-24 21:35:03.936484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.472 [2024-04-24 21:35:03.936711] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.472 [2024-04-24 21:35:03.936733] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.472 [2024-04-24 21:35:03.936747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.472 [2024-04-24 21:35:03.939666] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.472 [2024-04-24 21:35:03.948841] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.472 [2024-04-24 21:35:03.949295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.949514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.949551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.472 [2024-04-24 21:35:03.949568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.472 [2024-04-24 21:35:03.949829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.472 [2024-04-24 21:35:03.950043] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.472 [2024-04-24 21:35:03.950064] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.472 [2024-04-24 21:35:03.950075] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.472 [2024-04-24 21:35:03.952999] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.472 [2024-04-24 21:35:03.962112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.472 [2024-04-24 21:35:03.962520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.962712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.962741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.472 [2024-04-24 21:35:03.962763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.472 [2024-04-24 21:35:03.963003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.472 [2024-04-24 21:35:03.963216] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.472 [2024-04-24 21:35:03.963237] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.472 [2024-04-24 21:35:03.963250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.472 [2024-04-24 21:35:03.966470] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.472 [2024-04-24 21:35:03.975736] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.472 [2024-04-24 21:35:03.976206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.976395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.976420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.472 [2024-04-24 21:35:03.976436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.472 [2024-04-24 21:35:03.976671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.472 [2024-04-24 21:35:03.976894] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.472 [2024-04-24 21:35:03.976916] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.472 [2024-04-24 21:35:03.976930] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.472 [2024-04-24 21:35:03.980206] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.472 [2024-04-24 21:35:03.989155] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.472 [2024-04-24 21:35:03.989588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.989765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:03.989791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.472 [2024-04-24 21:35:03.989806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.472 [2024-04-24 21:35:03.990044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.472 [2024-04-24 21:35:03.990242] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.472 [2024-04-24 21:35:03.990263] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.472 [2024-04-24 21:35:03.990276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.472 [2024-04-24 21:35:03.993320] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.472 [2024-04-24 21:35:04.002339] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.472 [2024-04-24 21:35:04.002824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:04.003018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:04.003043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.472 [2024-04-24 21:35:04.003058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.472 [2024-04-24 21:35:04.003312] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.472 [2024-04-24 21:35:04.003503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.472 [2024-04-24 21:35:04.003524] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.472 [2024-04-24 21:35:04.003536] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.472 [2024-04-24 21:35:04.006458] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.472 [2024-04-24 21:35:04.015565] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.472 [2024-04-24 21:35:04.015982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:04.016204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.472 [2024-04-24 21:35:04.016230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.472 [2024-04-24 21:35:04.016247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.472 [2024-04-24 21:35:04.016495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.472 [2024-04-24 21:35:04.016719] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.472 [2024-04-24 21:35:04.016741] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.472 [2024-04-24 21:35:04.016755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.472 [2024-04-24 21:35:04.019671] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.473 [2024-04-24 21:35:04.028788] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.473 [2024-04-24 21:35:04.029227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.029449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.029476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.473 [2024-04-24 21:35:04.029492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.473 [2024-04-24 21:35:04.029757] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.473 [2024-04-24 21:35:04.029972] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.473 [2024-04-24 21:35:04.029993] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.473 [2024-04-24 21:35:04.030006] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.473 [2024-04-24 21:35:04.032909] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.473 [2024-04-24 21:35:04.042033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.473 [2024-04-24 21:35:04.042516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.042739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.042765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.473 [2024-04-24 21:35:04.042781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.473 [2024-04-24 21:35:04.043034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.473 [2024-04-24 21:35:04.043232] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.473 [2024-04-24 21:35:04.043253] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.473 [2024-04-24 21:35:04.043266] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.473 [2024-04-24 21:35:04.046185] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.473 [2024-04-24 21:35:04.055176] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.473 [2024-04-24 21:35:04.055684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.055876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.055901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.473 [2024-04-24 21:35:04.055932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.473 [2024-04-24 21:35:04.056176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.473 [2024-04-24 21:35:04.056369] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.473 [2024-04-24 21:35:04.056389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.473 [2024-04-24 21:35:04.056401] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.473 [2024-04-24 21:35:04.059329] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.473 [2024-04-24 21:35:04.068320] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.473 [2024-04-24 21:35:04.068745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.068937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.068963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.473 [2024-04-24 21:35:04.068979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.473 [2024-04-24 21:35:04.069224] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.473 [2024-04-24 21:35:04.069418] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.473 [2024-04-24 21:35:04.069438] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.473 [2024-04-24 21:35:04.069450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.473 [2024-04-24 21:35:04.072376] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.473 [2024-04-24 21:35:04.081528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.473 [2024-04-24 21:35:04.081927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.082133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.082158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.473 [2024-04-24 21:35:04.082174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.473 [2024-04-24 21:35:04.082405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.473 [2024-04-24 21:35:04.082637] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.473 [2024-04-24 21:35:04.082667] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.473 [2024-04-24 21:35:04.082682] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.473 [2024-04-24 21:35:04.085581] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.473 [2024-04-24 21:35:04.094775] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.473 [2024-04-24 21:35:04.095165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.095356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.095380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.473 [2024-04-24 21:35:04.095410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.473 [2024-04-24 21:35:04.095625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.473 [2024-04-24 21:35:04.095839] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.473 [2024-04-24 21:35:04.095860] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.473 [2024-04-24 21:35:04.095874] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.473 [2024-04-24 21:35:04.098790] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.473 [2024-04-24 21:35:04.108081] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.473 [2024-04-24 21:35:04.108501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.108728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.108755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.473 [2024-04-24 21:35:04.108786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.473 [2024-04-24 21:35:04.109028] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.473 [2024-04-24 21:35:04.109221] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.473 [2024-04-24 21:35:04.109241] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.473 [2024-04-24 21:35:04.109254] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.473 [2024-04-24 21:35:04.112174] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.473 [2024-04-24 21:35:04.121301] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.473 [2024-04-24 21:35:04.121855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.122048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.122073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.473 [2024-04-24 21:35:04.122089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.473 [2024-04-24 21:35:04.122351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.473 [2024-04-24 21:35:04.122545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.473 [2024-04-24 21:35:04.122565] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.473 [2024-04-24 21:35:04.122582] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.473 [2024-04-24 21:35:04.125506] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.473 [2024-04-24 21:35:04.134456] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.473 [2024-04-24 21:35:04.134907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.135094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.473 [2024-04-24 21:35:04.135120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.473 [2024-04-24 21:35:04.135137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.473 [2024-04-24 21:35:04.135384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.473 [2024-04-24 21:35:04.135593] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.473 [2024-04-24 21:35:04.135638] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.473 [2024-04-24 21:35:04.135659] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.473 [2024-04-24 21:35:04.138558] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.733 [2024-04-24 21:35:04.148163] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.733 [2024-04-24 21:35:04.148574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.733 [2024-04-24 21:35:04.148774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.733 [2024-04-24 21:35:04.148800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.733 [2024-04-24 21:35:04.148817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.733 [2024-04-24 21:35:04.149046] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.733 [2024-04-24 21:35:04.149240] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.733 [2024-04-24 21:35:04.149261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.733 [2024-04-24 21:35:04.149273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.733 [2024-04-24 21:35:04.152320] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.733 [2024-04-24 21:35:04.161402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.733 [2024-04-24 21:35:04.161858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.733 [2024-04-24 21:35:04.162088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.733 [2024-04-24 21:35:04.162114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.733 [2024-04-24 21:35:04.162130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.733 [2024-04-24 21:35:04.162374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.733 [2024-04-24 21:35:04.162568] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.733 [2024-04-24 21:35:04.162588] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.733 [2024-04-24 21:35:04.162600] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.733 [2024-04-24 21:35:04.165538] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.733 [2024-04-24 21:35:04.174671] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.733 [2024-04-24 21:35:04.175067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.733 [2024-04-24 21:35:04.175310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.733 [2024-04-24 21:35:04.175336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.733 [2024-04-24 21:35:04.175352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.733 [2024-04-24 21:35:04.175594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.733 [2024-04-24 21:35:04.175838] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.733 [2024-04-24 21:35:04.175861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.733 [2024-04-24 21:35:04.175875] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.733 [2024-04-24 21:35:04.178815] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.733 [2024-04-24 21:35:04.187889] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.733 [2024-04-24 21:35:04.188328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.188520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.188547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.734 [2024-04-24 21:35:04.188564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.734 [2024-04-24 21:35:04.188827] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.734 [2024-04-24 21:35:04.189039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.734 [2024-04-24 21:35:04.189060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.734 [2024-04-24 21:35:04.189072] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.734 [2024-04-24 21:35:04.191990] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.734 [2024-04-24 21:35:04.201104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.734 [2024-04-24 21:35:04.201522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.201717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.201743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.734 [2024-04-24 21:35:04.201759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.734 [2024-04-24 21:35:04.202010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.734 [2024-04-24 21:35:04.202202] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.734 [2024-04-24 21:35:04.202222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.734 [2024-04-24 21:35:04.202234] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.734 [2024-04-24 21:35:04.205180] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.734 [2024-04-24 21:35:04.214308] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.734 [2024-04-24 21:35:04.214754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.214948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.214974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.734 [2024-04-24 21:35:04.214990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.734 [2024-04-24 21:35:04.215241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.734 [2024-04-24 21:35:04.215468] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.734 [2024-04-24 21:35:04.215489] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.734 [2024-04-24 21:35:04.215503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.734 [2024-04-24 21:35:04.218802] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.734 [2024-04-24 21:35:04.227901] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.734 [2024-04-24 21:35:04.228332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.228528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.228555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.734 [2024-04-24 21:35:04.228572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.734 [2024-04-24 21:35:04.228823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.734 [2024-04-24 21:35:04.229042] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.734 [2024-04-24 21:35:04.229063] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.734 [2024-04-24 21:35:04.229077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.734 [2024-04-24 21:35:04.232295] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.734 [2024-04-24 21:35:04.241144] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.734 [2024-04-24 21:35:04.241577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.241763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.241789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.734 [2024-04-24 21:35:04.241805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.734 [2024-04-24 21:35:04.242043] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.734 [2024-04-24 21:35:04.242252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.734 [2024-04-24 21:35:04.242272] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.734 [2024-04-24 21:35:04.242285] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.734 [2024-04-24 21:35:04.245256] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.734 [2024-04-24 21:35:04.254323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.734 [2024-04-24 21:35:04.254750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.254943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.734 [2024-04-24 21:35:04.254969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:38.734 [2024-04-24 21:35:04.254986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:38.734 [2024-04-24 21:35:04.255248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:38.734 [2024-04-24 21:35:04.255440] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.734 [2024-04-24 21:35:04.255460] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.734 [2024-04-24 21:35:04.255473] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.734 [2024-04-24 21:35:04.258401] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.734 [2024-04-24 21:35:04.267578] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.734 [2024-04-24 21:35:04.267990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.734 [2024-04-24 21:35:04.268214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.734 [2024-04-24 21:35:04.268240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.734 [2024-04-24 21:35:04.268255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.734 [2024-04-24 21:35:04.268469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.734 [2024-04-24 21:35:04.268740] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.734 [2024-04-24 21:35:04.268763] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.734 [2024-04-24 21:35:04.268776] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.734 [2024-04-24 21:35:04.271635] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.734 [2024-04-24 21:35:04.280803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.734 [2024-04-24 21:35:04.281271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.734 [2024-04-24 21:35:04.281492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.734 [2024-04-24 21:35:04.281518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.734 [2024-04-24 21:35:04.281534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.734 [2024-04-24 21:35:04.281798] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.734 [2024-04-24 21:35:04.282011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.734 [2024-04-24 21:35:04.282032] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.734 [2024-04-24 21:35:04.282045] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.734 [2024-04-24 21:35:04.285003] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.734 [2024-04-24 21:35:04.293985] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.734 [2024-04-24 21:35:04.294405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.734 [2024-04-24 21:35:04.294608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.734 [2024-04-24 21:35:04.294646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.734 [2024-04-24 21:35:04.294670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.734 [2024-04-24 21:35:04.294926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.734 [2024-04-24 21:35:04.295137] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.734 [2024-04-24 21:35:04.295158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.735 [2024-04-24 21:35:04.295171] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.735 [2024-04-24 21:35:04.298089] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.735 [2024-04-24 21:35:04.307232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.735 [2024-04-24 21:35:04.307720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.307945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.307971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.735 [2024-04-24 21:35:04.307988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.735 [2024-04-24 21:35:04.308235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.735 [2024-04-24 21:35:04.308428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.735 [2024-04-24 21:35:04.308448] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.735 [2024-04-24 21:35:04.308461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.735 [2024-04-24 21:35:04.311385] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.735 [2024-04-24 21:35:04.320493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.735 [2024-04-24 21:35:04.320894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.321076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.321101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.735 [2024-04-24 21:35:04.321116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.735 [2024-04-24 21:35:04.321348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.735 [2024-04-24 21:35:04.321555] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.735 [2024-04-24 21:35:04.321576] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.735 [2024-04-24 21:35:04.321589] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.735 [2024-04-24 21:35:04.324896] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.735 [2024-04-24 21:35:04.334373] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.735 [2024-04-24 21:35:04.334868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.335038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.335063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.735 [2024-04-24 21:35:04.335082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.735 [2024-04-24 21:35:04.335311] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.735 [2024-04-24 21:35:04.335555] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.735 [2024-04-24 21:35:04.335579] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.735 [2024-04-24 21:35:04.335595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.735 [2024-04-24 21:35:04.339072] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.735 [2024-04-24 21:35:04.348295] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.735 [2024-04-24 21:35:04.348766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.349009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.349038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.735 [2024-04-24 21:35:04.349057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.735 [2024-04-24 21:35:04.349293] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.735 [2024-04-24 21:35:04.349535] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.735 [2024-04-24 21:35:04.349560] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.735 [2024-04-24 21:35:04.349576] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.735 [2024-04-24 21:35:04.353064] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.735 [2024-04-24 21:35:04.362208] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.735 [2024-04-24 21:35:04.362676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.363007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.363055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.735 [2024-04-24 21:35:04.363073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.735 [2024-04-24 21:35:04.363310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.735 [2024-04-24 21:35:04.363552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.735 [2024-04-24 21:35:04.363577] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.735 [2024-04-24 21:35:04.363593] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.735 [2024-04-24 21:35:04.367140] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.735 [2024-04-24 21:35:04.376106] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.735 [2024-04-24 21:35:04.376649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.376860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.376891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.735 [2024-04-24 21:35:04.376909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.735 [2024-04-24 21:35:04.377152] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.735 [2024-04-24 21:35:04.377394] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.735 [2024-04-24 21:35:04.377419] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.735 [2024-04-24 21:35:04.377434] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.735 [2024-04-24 21:35:04.380981] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.735 [2024-04-24 21:35:04.389966] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.735 [2024-04-24 21:35:04.390433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.390714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.390771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.735 [2024-04-24 21:35:04.390789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.735 [2024-04-24 21:35:04.391026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.735 [2024-04-24 21:35:04.391268] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.735 [2024-04-24 21:35:04.391293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.735 [2024-04-24 21:35:04.391309] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.735 [2024-04-24 21:35:04.394861] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.735 [2024-04-24 21:35:04.403874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.735 [2024-04-24 21:35:04.404316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.404528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.735 [2024-04-24 21:35:04.404556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.735 [2024-04-24 21:35:04.404574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.735 [2024-04-24 21:35:04.404829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.735 [2024-04-24 21:35:04.405072] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.735 [2024-04-24 21:35:04.405098] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.735 [2024-04-24 21:35:04.405114] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.735 [2024-04-24 21:35:04.408658] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.995 [2024-04-24 21:35:04.417836] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.995 [2024-04-24 21:35:04.418281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.995 [2024-04-24 21:35:04.418679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.995 [2024-04-24 21:35:04.418712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.995 [2024-04-24 21:35:04.418730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.995 [2024-04-24 21:35:04.418967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.995 [2024-04-24 21:35:04.419215] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.995 [2024-04-24 21:35:04.419240] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.995 [2024-04-24 21:35:04.419256] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.995 [2024-04-24 21:35:04.422805] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.995 [2024-04-24 21:35:04.431785] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.995 [2024-04-24 21:35:04.432255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.995 [2024-04-24 21:35:04.432460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.995 [2024-04-24 21:35:04.432488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.995 [2024-04-24 21:35:04.432506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.995 [2024-04-24 21:35:04.432762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.995 [2024-04-24 21:35:04.433006] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.995 [2024-04-24 21:35:04.433032] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.995 [2024-04-24 21:35:04.433048] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.995 [2024-04-24 21:35:04.436577] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.995 [2024-04-24 21:35:04.445776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.995 [2024-04-24 21:35:04.446238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.995 [2024-04-24 21:35:04.446539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.995 [2024-04-24 21:35:04.446597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.995 [2024-04-24 21:35:04.446615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.995 [2024-04-24 21:35:04.446868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.995 [2024-04-24 21:35:04.447112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.996 [2024-04-24 21:35:04.447138] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.996 [2024-04-24 21:35:04.447154] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.996 [2024-04-24 21:35:04.450699] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.996 [2024-04-24 21:35:04.459690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.996 [2024-04-24 21:35:04.460150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.460459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.460487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.996 [2024-04-24 21:35:04.460504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.996 [2024-04-24 21:35:04.460762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.996 [2024-04-24 21:35:04.461006] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.996 [2024-04-24 21:35:04.461038] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.996 [2024-04-24 21:35:04.461056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.996 [2024-04-24 21:35:04.464585] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.996 [2024-04-24 21:35:04.473595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.996 [2024-04-24 21:35:04.474076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.474309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.474338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.996 [2024-04-24 21:35:04.474356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.996 [2024-04-24 21:35:04.474593] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.996 [2024-04-24 21:35:04.474844] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.996 [2024-04-24 21:35:04.474871] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.996 [2024-04-24 21:35:04.474887] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.996 [2024-04-24 21:35:04.478428] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.996 [2024-04-24 21:35:04.487425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.996 [2024-04-24 21:35:04.487856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.488096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.488126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.996 [2024-04-24 21:35:04.488144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.996 [2024-04-24 21:35:04.488381] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.996 [2024-04-24 21:35:04.488623] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.996 [2024-04-24 21:35:04.488664] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.996 [2024-04-24 21:35:04.488681] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.996 [2024-04-24 21:35:04.492436] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.996 [2024-04-24 21:35:04.501418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.996 [2024-04-24 21:35:04.501899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.502110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.502138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.996 [2024-04-24 21:35:04.502156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.996 [2024-04-24 21:35:04.502392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.996 [2024-04-24 21:35:04.502643] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.996 [2024-04-24 21:35:04.502668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.996 [2024-04-24 21:35:04.502691] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.996 [2024-04-24 21:35:04.506237] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.996 [2024-04-24 21:35:04.515422] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.996 [2024-04-24 21:35:04.515897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.516083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.516110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.996 [2024-04-24 21:35:04.516128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.996 [2024-04-24 21:35:04.516364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.996 [2024-04-24 21:35:04.516606] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.996 [2024-04-24 21:35:04.516645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.996 [2024-04-24 21:35:04.516671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.996 [2024-04-24 21:35:04.520203] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.996 [2024-04-24 21:35:04.529377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.996 [2024-04-24 21:35:04.529826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.530035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.530065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.996 [2024-04-24 21:35:04.530083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.996 [2024-04-24 21:35:04.530318] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.996 [2024-04-24 21:35:04.530560] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.996 [2024-04-24 21:35:04.530585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.996 [2024-04-24 21:35:04.530600] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.996 [2024-04-24 21:35:04.534146] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.996 [2024-04-24 21:35:04.543329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.996 [2024-04-24 21:35:04.543792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.544024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.544064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.996 [2024-04-24 21:35:04.544082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.996 [2024-04-24 21:35:04.544318] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.996 [2024-04-24 21:35:04.544560] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.996 [2024-04-24 21:35:04.544585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.996 [2024-04-24 21:35:04.544601] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.996 [2024-04-24 21:35:04.548154] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.996 [2024-04-24 21:35:04.557125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.996 [2024-04-24 21:35:04.557729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.557953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.557982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.996 [2024-04-24 21:35:04.558000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.996 [2024-04-24 21:35:04.558237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.996 [2024-04-24 21:35:04.558479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.996 [2024-04-24 21:35:04.558504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.996 [2024-04-24 21:35:04.558519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.996 [2024-04-24 21:35:04.562067] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.996 [2024-04-24 21:35:04.571038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.996 [2024-04-24 21:35:04.571504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.571814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.996 [2024-04-24 21:35:04.571882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.996 [2024-04-24 21:35:04.571901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.996 [2024-04-24 21:35:04.572137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.996 [2024-04-24 21:35:04.572379] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.996 [2024-04-24 21:35:04.572405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.996 [2024-04-24 21:35:04.572421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.996 [2024-04-24 21:35:04.575970] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.996 [2024-04-24 21:35:04.584953] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.997 [2024-04-24 21:35:04.585431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.585646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.585678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.997 [2024-04-24 21:35:04.585696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.997 [2024-04-24 21:35:04.585932] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.997 [2024-04-24 21:35:04.586174] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.997 [2024-04-24 21:35:04.586200] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.997 [2024-04-24 21:35:04.586215] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.997 [2024-04-24 21:35:04.589773] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.997 [2024-04-24 21:35:04.598750] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.997 [2024-04-24 21:35:04.599214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.599418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.599446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.997 [2024-04-24 21:35:04.599464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.997 [2024-04-24 21:35:04.599719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.997 [2024-04-24 21:35:04.599962] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.997 [2024-04-24 21:35:04.599988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.997 [2024-04-24 21:35:04.600003] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.997 [2024-04-24 21:35:04.603536] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.997 [2024-04-24 21:35:04.612730] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.997 [2024-04-24 21:35:04.613193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.613425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.613454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.997 [2024-04-24 21:35:04.613472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.997 [2024-04-24 21:35:04.613729] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.997 [2024-04-24 21:35:04.613972] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.997 [2024-04-24 21:35:04.613998] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.997 [2024-04-24 21:35:04.614014] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.997 [2024-04-24 21:35:04.617545] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.997 [2024-04-24 21:35:04.626514] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.997 [2024-04-24 21:35:04.626997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.627376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.627428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.997 [2024-04-24 21:35:04.627445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.997 [2024-04-24 21:35:04.627701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.997 [2024-04-24 21:35:04.627943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.997 [2024-04-24 21:35:04.627968] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.997 [2024-04-24 21:35:04.627984] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.997 [2024-04-24 21:35:04.631513] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.997 [2024-04-24 21:35:04.640492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.997 [2024-04-24 21:35:04.640955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.641163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.641193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.997 [2024-04-24 21:35:04.641211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.997 [2024-04-24 21:35:04.641447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.997 [2024-04-24 21:35:04.641710] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.997 [2024-04-24 21:35:04.641737] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.997 [2024-04-24 21:35:04.641754] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.997 [2024-04-24 21:35:04.645283] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.997 [2024-04-24 21:35:04.654457] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.997 [2024-04-24 21:35:04.654939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.655177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.655206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.997 [2024-04-24 21:35:04.655223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.997 [2024-04-24 21:35:04.655460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.997 [2024-04-24 21:35:04.655721] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.997 [2024-04-24 21:35:04.655749] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.997 [2024-04-24 21:35:04.655765] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.997 [2024-04-24 21:35:04.659298] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.997 [2024-04-24 21:35:04.668265] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.997 [2024-04-24 21:35:04.668709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.668950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.997 [2024-04-24 21:35:04.668980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:38.997 [2024-04-24 21:35:04.668998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:38.997 [2024-04-24 21:35:04.669235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:38.997 [2024-04-24 21:35:04.669478] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.997 [2024-04-24 21:35:04.669503] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.997 [2024-04-24 21:35:04.669519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.257 [2024-04-24 21:35:04.673065] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.257 [2024-04-24 21:35:04.682243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.257 [2024-04-24 21:35:04.682723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.257 [2024-04-24 21:35:04.683091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.257 [2024-04-24 21:35:04.683145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.257 [2024-04-24 21:35:04.683163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.257 [2024-04-24 21:35:04.683399] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.257 [2024-04-24 21:35:04.683657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.257 [2024-04-24 21:35:04.683684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.257 [2024-04-24 21:35:04.683699] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.257 [2024-04-24 21:35:04.687235] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.257 [2024-04-24 21:35:04.696203] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.257 [2024-04-24 21:35:04.696643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.257 [2024-04-24 21:35:04.697025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.257 [2024-04-24 21:35:04.697079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.257 [2024-04-24 21:35:04.697096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.257 [2024-04-24 21:35:04.697332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.257 [2024-04-24 21:35:04.697572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.257 [2024-04-24 21:35:04.697597] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.257 [2024-04-24 21:35:04.697612] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.257 [2024-04-24 21:35:04.701160] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.257 [2024-04-24 21:35:04.710129] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.257 [2024-04-24 21:35:04.710597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.257 [2024-04-24 21:35:04.710855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.257 [2024-04-24 21:35:04.710886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.257 [2024-04-24 21:35:04.710905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.257 [2024-04-24 21:35:04.711141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.257 [2024-04-24 21:35:04.711383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.257 [2024-04-24 21:35:04.711408] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.257 [2024-04-24 21:35:04.711424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.257 [2024-04-24 21:35:04.714975] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.257 [2024-04-24 21:35:04.723978] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.257 [2024-04-24 21:35:04.724445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.257 [2024-04-24 21:35:04.724655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.257 [2024-04-24 21:35:04.724686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.257 [2024-04-24 21:35:04.724711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.257 [2024-04-24 21:35:04.724949] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.258 [2024-04-24 21:35:04.725191] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.258 [2024-04-24 21:35:04.725216] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.258 [2024-04-24 21:35:04.725232] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.258 [2024-04-24 21:35:04.728779] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.258 [2024-04-24 21:35:04.737965] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.258 [2024-04-24 21:35:04.738498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.738754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.738786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.258 [2024-04-24 21:35:04.738804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.258 [2024-04-24 21:35:04.739042] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.258 [2024-04-24 21:35:04.739285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.258 [2024-04-24 21:35:04.739309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.258 [2024-04-24 21:35:04.739325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.258 [2024-04-24 21:35:04.742868] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.258 [2024-04-24 21:35:04.751868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.258 [2024-04-24 21:35:04.752407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.752610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.752649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.258 [2024-04-24 21:35:04.752669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.258 [2024-04-24 21:35:04.752905] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.258 [2024-04-24 21:35:04.753146] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.258 [2024-04-24 21:35:04.753170] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.258 [2024-04-24 21:35:04.753186] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.258 [2024-04-24 21:35:04.756737] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.258 [2024-04-24 21:35:04.765722] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.258 [2024-04-24 21:35:04.766363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.766742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.766773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.258 [2024-04-24 21:35:04.766791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.258 [2024-04-24 21:35:04.767033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.258 [2024-04-24 21:35:04.767274] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.258 [2024-04-24 21:35:04.767299] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.258 [2024-04-24 21:35:04.767315] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.258 [2024-04-24 21:35:04.770862] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.258 [2024-04-24 21:35:04.779694] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.258 [2024-04-24 21:35:04.780287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.780724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.780755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.258 [2024-04-24 21:35:04.780773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.258 [2024-04-24 21:35:04.781009] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.258 [2024-04-24 21:35:04.781251] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.258 [2024-04-24 21:35:04.781275] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.258 [2024-04-24 21:35:04.781291] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.258 [2024-04-24 21:35:04.784837] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.258 [2024-04-24 21:35:04.793645] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.258 [2024-04-24 21:35:04.794117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.794322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.794350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.258 [2024-04-24 21:35:04.794368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.258 [2024-04-24 21:35:04.794609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.258 [2024-04-24 21:35:04.794860] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.258 [2024-04-24 21:35:04.794890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.258 [2024-04-24 21:35:04.794905] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.258 [2024-04-24 21:35:04.798440] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.258 [2024-04-24 21:35:04.807678] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.258 [2024-04-24 21:35:04.808295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.808738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.258 [2024-04-24 21:35:04.808769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.258 [2024-04-24 21:35:04.808787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.258 [2024-04-24 21:35:04.809023] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.258 [2024-04-24 21:35:04.809273] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.258 [2024-04-24 21:35:04.809298] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.258 [2024-04-24 21:35:04.809313] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.258 [2024-04-24 21:35:04.812860] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.259 [2024-04-24 21:35:04.821651] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.259 [2024-04-24 21:35:04.822164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.822361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.822391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.259 [2024-04-24 21:35:04.822409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.259 [2024-04-24 21:35:04.822657] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.259 [2024-04-24 21:35:04.822899] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.259 [2024-04-24 21:35:04.822924] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.259 [2024-04-24 21:35:04.822939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.259 [2024-04-24 21:35:04.826483] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.259 [2024-04-24 21:35:04.835470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.259 [2024-04-24 21:35:04.835930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.836257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.836317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.259 [2024-04-24 21:35:04.836335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.259 [2024-04-24 21:35:04.836572] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.259 [2024-04-24 21:35:04.836823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.259 [2024-04-24 21:35:04.836848] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.259 [2024-04-24 21:35:04.836864] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.259 [2024-04-24 21:35:04.840400] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.259 [2024-04-24 21:35:04.849381] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.259 [2024-04-24 21:35:04.849831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.850054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.850082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.259 [2024-04-24 21:35:04.850100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.259 [2024-04-24 21:35:04.850335] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.259 [2024-04-24 21:35:04.850577] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.259 [2024-04-24 21:35:04.850607] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.259 [2024-04-24 21:35:04.850624] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.259 [2024-04-24 21:35:04.854176] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.259 [2024-04-24 21:35:04.863367] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.259 [2024-04-24 21:35:04.863809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.863999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.864028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.259 [2024-04-24 21:35:04.864046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.259 [2024-04-24 21:35:04.864281] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.259 [2024-04-24 21:35:04.864522] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.259 [2024-04-24 21:35:04.864546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.259 [2024-04-24 21:35:04.864562] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.259 [2024-04-24 21:35:04.868108] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.259 [2024-04-24 21:35:04.877297] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.259 [2024-04-24 21:35:04.877741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.878030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.878077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.259 [2024-04-24 21:35:04.878096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.259 [2024-04-24 21:35:04.878332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.259 [2024-04-24 21:35:04.878574] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.259 [2024-04-24 21:35:04.878599] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.259 [2024-04-24 21:35:04.878615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.259 [2024-04-24 21:35:04.882188] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.259 [2024-04-24 21:35:04.891182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.259 [2024-04-24 21:35:04.891657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.891850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.891881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.259 [2024-04-24 21:35:04.891899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.259 [2024-04-24 21:35:04.892137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.259 [2024-04-24 21:35:04.892379] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.259 [2024-04-24 21:35:04.892403] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.259 [2024-04-24 21:35:04.892425] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.259 [2024-04-24 21:35:04.895975] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.259 [2024-04-24 21:35:04.905179] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.259 [2024-04-24 21:35:04.905619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.905858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.259 [2024-04-24 21:35:04.905887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.260 [2024-04-24 21:35:04.905913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.260 [2024-04-24 21:35:04.906149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.260 [2024-04-24 21:35:04.906392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.260 [2024-04-24 21:35:04.906417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.260 [2024-04-24 21:35:04.906433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.260 [2024-04-24 21:35:04.909989] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.260 [2024-04-24 21:35:04.919195] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.260 [2024-04-24 21:35:04.919673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.260 [2024-04-24 21:35:04.919928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.260 [2024-04-24 21:35:04.919981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.260 [2024-04-24 21:35:04.919999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.260 [2024-04-24 21:35:04.920236] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.260 [2024-04-24 21:35:04.920477] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.260 [2024-04-24 21:35:04.920502] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.260 [2024-04-24 21:35:04.920517] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.260 [2024-04-24 21:35:04.924072] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.519 [2024-04-24 21:35:04.933058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.519 [2024-04-24 21:35:04.933541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:04.933782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:04.933812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.520 [2024-04-24 21:35:04.933831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.520 [2024-04-24 21:35:04.934066] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.520 [2024-04-24 21:35:04.934309] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.520 [2024-04-24 21:35:04.934334] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.520 [2024-04-24 21:35:04.934350] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.520 [2024-04-24 21:35:04.937906] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.520 [2024-04-24 21:35:04.946890] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.520 [2024-04-24 21:35:04.947353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:04.947562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:04.947590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.520 [2024-04-24 21:35:04.947609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.520 [2024-04-24 21:35:04.947861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.520 [2024-04-24 21:35:04.948105] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.520 [2024-04-24 21:35:04.948130] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.520 [2024-04-24 21:35:04.948146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.520 [2024-04-24 21:35:04.951691] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.520 [2024-04-24 21:35:04.960860] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.520 [2024-04-24 21:35:04.961328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:04.961539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:04.961569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.520 [2024-04-24 21:35:04.961587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.520 [2024-04-24 21:35:04.961845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.520 [2024-04-24 21:35:04.962089] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.520 [2024-04-24 21:35:04.962115] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.520 [2024-04-24 21:35:04.962131] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.520 [2024-04-24 21:35:04.965674] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.520 [2024-04-24 21:35:04.974675] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.520 [2024-04-24 21:35:04.975119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:04.975400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:04.975455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.520 [2024-04-24 21:35:04.975474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.520 [2024-04-24 21:35:04.975723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.520 [2024-04-24 21:35:04.975964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.520 [2024-04-24 21:35:04.975989] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.520 [2024-04-24 21:35:04.976005] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.520 [2024-04-24 21:35:04.979545] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.520 [2024-04-24 21:35:04.988549] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.520 [2024-04-24 21:35:04.989038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:04.989402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:04.989433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.520 [2024-04-24 21:35:04.989451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.520 [2024-04-24 21:35:04.989700] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.520 [2024-04-24 21:35:04.989943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.520 [2024-04-24 21:35:04.989968] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.520 [2024-04-24 21:35:04.989985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.520 [2024-04-24 21:35:04.993531] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.520 [2024-04-24 21:35:05.002507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.520 [2024-04-24 21:35:05.002960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:05.003177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:05.003209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.520 [2024-04-24 21:35:05.003227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.520 [2024-04-24 21:35:05.003464] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.520 [2024-04-24 21:35:05.003726] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.520 [2024-04-24 21:35:05.003753] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.520 [2024-04-24 21:35:05.003768] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.520 [2024-04-24 21:35:05.007296] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.520 [2024-04-24 21:35:05.016479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.520 [2024-04-24 21:35:05.016958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:05.017279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:05.017333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.520 [2024-04-24 21:35:05.017351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.520 [2024-04-24 21:35:05.017587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.520 [2024-04-24 21:35:05.017847] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.520 [2024-04-24 21:35:05.017875] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.520 [2024-04-24 21:35:05.017891] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.520 [2024-04-24 21:35:05.021422] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.520 [2024-04-24 21:35:05.030403] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.520 [2024-04-24 21:35:05.030864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:05.031232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:05.031284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.520 [2024-04-24 21:35:05.031303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.520 [2024-04-24 21:35:05.031540] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.520 [2024-04-24 21:35:05.031802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.520 [2024-04-24 21:35:05.031829] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.520 [2024-04-24 21:35:05.031845] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.520 [2024-04-24 21:35:05.035376] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.520 [2024-04-24 21:35:05.044347] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.520 [2024-04-24 21:35:05.044824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:05.045196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:05.045252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.520 [2024-04-24 21:35:05.045270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.520 [2024-04-24 21:35:05.045506] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.520 [2024-04-24 21:35:05.045768] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.520 [2024-04-24 21:35:05.045795] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.520 [2024-04-24 21:35:05.045811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.520 [2024-04-24 21:35:05.049341] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.520 [2024-04-24 21:35:05.058315] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.520 [2024-04-24 21:35:05.058762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.520 [2024-04-24 21:35:05.058978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.059008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.521 [2024-04-24 21:35:05.059027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.521 [2024-04-24 21:35:05.059263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.521 [2024-04-24 21:35:05.059506] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.521 [2024-04-24 21:35:05.059531] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.521 [2024-04-24 21:35:05.059546] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.521 [2024-04-24 21:35:05.063095] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.521 [2024-04-24 21:35:05.072271] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.521 [2024-04-24 21:35:05.072722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.072987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.073017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.521 [2024-04-24 21:35:05.073036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.521 [2024-04-24 21:35:05.073273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.521 [2024-04-24 21:35:05.073515] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.521 [2024-04-24 21:35:05.073540] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.521 [2024-04-24 21:35:05.073556] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.521 [2024-04-24 21:35:05.077108] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.521 [2024-04-24 21:35:05.086080] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.521 [2024-04-24 21:35:05.086521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.086762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.086794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.521 [2024-04-24 21:35:05.086812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.521 [2024-04-24 21:35:05.087049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.521 [2024-04-24 21:35:05.087291] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.521 [2024-04-24 21:35:05.087316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.521 [2024-04-24 21:35:05.087332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.521 [2024-04-24 21:35:05.090884] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.521 [2024-04-24 21:35:05.100077] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.521 [2024-04-24 21:35:05.100558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.100894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.100949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.521 [2024-04-24 21:35:05.100968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.521 [2024-04-24 21:35:05.101205] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.521 [2024-04-24 21:35:05.101447] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.521 [2024-04-24 21:35:05.101472] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.521 [2024-04-24 21:35:05.101488] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.521 [2024-04-24 21:35:05.105047] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.521 [2024-04-24 21:35:05.114032] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.521 [2024-04-24 21:35:05.114477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.114782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.114814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.521 [2024-04-24 21:35:05.114840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.521 [2024-04-24 21:35:05.115077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.521 [2024-04-24 21:35:05.115319] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.521 [2024-04-24 21:35:05.115345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.521 [2024-04-24 21:35:05.115360] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.521 [2024-04-24 21:35:05.118910] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.521 [2024-04-24 21:35:05.127885] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.521 [2024-04-24 21:35:05.128352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.128552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.128580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.521 [2024-04-24 21:35:05.128598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.521 [2024-04-24 21:35:05.128852] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.521 [2024-04-24 21:35:05.129095] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.521 [2024-04-24 21:35:05.129120] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.521 [2024-04-24 21:35:05.129135] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.521 [2024-04-24 21:35:05.132680] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.521 [2024-04-24 21:35:05.141851] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.521 [2024-04-24 21:35:05.142316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.142525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.142554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.521 [2024-04-24 21:35:05.142572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.521 [2024-04-24 21:35:05.142828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.521 [2024-04-24 21:35:05.143080] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.521 [2024-04-24 21:35:05.143105] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.521 [2024-04-24 21:35:05.143119] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.521 [2024-04-24 21:35:05.146664] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.521 [2024-04-24 21:35:05.155833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.521 [2024-04-24 21:35:05.156307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.156533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.156562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.521 [2024-04-24 21:35:05.156580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.521 [2024-04-24 21:35:05.156842] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.521 [2024-04-24 21:35:05.157085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.521 [2024-04-24 21:35:05.157110] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.521 [2024-04-24 21:35:05.157125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.521 [2024-04-24 21:35:05.160670] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.521 [2024-04-24 21:35:05.169621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.521 [2024-04-24 21:35:05.170165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.170379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.170408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.521 [2024-04-24 21:35:05.170426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.521 [2024-04-24 21:35:05.170682] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.521 [2024-04-24 21:35:05.170926] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.521 [2024-04-24 21:35:05.170951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.521 [2024-04-24 21:35:05.170967] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.521 [2024-04-24 21:35:05.174497] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.521 [2024-04-24 21:35:05.183472] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.521 [2024-04-24 21:35:05.183946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.184349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.521 [2024-04-24 21:35:05.184401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.521 [2024-04-24 21:35:05.184418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.522 [2024-04-24 21:35:05.184672] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.522 [2024-04-24 21:35:05.184914] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.522 [2024-04-24 21:35:05.184939] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.522 [2024-04-24 21:35:05.184954] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.522 [2024-04-24 21:35:05.188490] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.781 [2024-04-24 21:35:05.197470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.781 [2024-04-24 21:35:05.197921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.781 [2024-04-24 21:35:05.198149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.781 [2024-04-24 21:35:05.198197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.781 [2024-04-24 21:35:05.198215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.781 [2024-04-24 21:35:05.198457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.781 [2024-04-24 21:35:05.198720] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.781 [2024-04-24 21:35:05.198747] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.781 [2024-04-24 21:35:05.198764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.781 [2024-04-24 21:35:05.202293] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.781 [2024-04-24 21:35:05.211292] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.781 [2024-04-24 21:35:05.211766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.781 [2024-04-24 21:35:05.212031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.781 [2024-04-24 21:35:05.212087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.781 [2024-04-24 21:35:05.212105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.781 [2024-04-24 21:35:05.212342] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.781 [2024-04-24 21:35:05.212584] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.782 [2024-04-24 21:35:05.212609] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.782 [2024-04-24 21:35:05.212625] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.782 [2024-04-24 21:35:05.216179] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.782 [2024-04-24 21:35:05.225179] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.782 [2024-04-24 21:35:05.225620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.225841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.225870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.782 [2024-04-24 21:35:05.225888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.782 [2024-04-24 21:35:05.226125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.782 [2024-04-24 21:35:05.226365] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.782 [2024-04-24 21:35:05.226390] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.782 [2024-04-24 21:35:05.226406] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.782 [2024-04-24 21:35:05.229962] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.782 [2024-04-24 21:35:05.239156] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.782 [2024-04-24 21:35:05.239604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.239848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.239877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.782 [2024-04-24 21:35:05.239895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.782 [2024-04-24 21:35:05.240132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.782 [2024-04-24 21:35:05.240380] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.782 [2024-04-24 21:35:05.240406] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.782 [2024-04-24 21:35:05.240422] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.782 [2024-04-24 21:35:05.243971] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.782 [2024-04-24 21:35:05.253154] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.782 [2024-04-24 21:35:05.253620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.253841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.253870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.782 [2024-04-24 21:35:05.253888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.782 [2024-04-24 21:35:05.254125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.782 [2024-04-24 21:35:05.254367] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.782 [2024-04-24 21:35:05.254392] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.782 [2024-04-24 21:35:05.254409] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.782 [2024-04-24 21:35:05.257954] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.782 [2024-04-24 21:35:05.267134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.782 [2024-04-24 21:35:05.267598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.267819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.267849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.782 [2024-04-24 21:35:05.267866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.782 [2024-04-24 21:35:05.268102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.782 [2024-04-24 21:35:05.268344] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.782 [2024-04-24 21:35:05.268369] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.782 [2024-04-24 21:35:05.268384] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.782 [2024-04-24 21:35:05.271932] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.782 [2024-04-24 21:35:05.281115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.782 [2024-04-24 21:35:05.281554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.281785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.281833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.782 [2024-04-24 21:35:05.281851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.782 [2024-04-24 21:35:05.282089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.782 [2024-04-24 21:35:05.282330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.782 [2024-04-24 21:35:05.282356] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.782 [2024-04-24 21:35:05.282377] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.782 [2024-04-24 21:35:05.285926] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.782 [2024-04-24 21:35:05.294919] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.782 [2024-04-24 21:35:05.295386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.295637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.295673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.782 [2024-04-24 21:35:05.295692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.782 [2024-04-24 21:35:05.295929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.782 [2024-04-24 21:35:05.296170] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.782 [2024-04-24 21:35:05.296195] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.782 [2024-04-24 21:35:05.296211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.782 [2024-04-24 21:35:05.299758] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.782 [2024-04-24 21:35:05.308751] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.782 [2024-04-24 21:35:05.309218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.309463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.309492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.782 [2024-04-24 21:35:05.309510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.782 [2024-04-24 21:35:05.309766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.782 [2024-04-24 21:35:05.310009] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.782 [2024-04-24 21:35:05.310035] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.782 [2024-04-24 21:35:05.310051] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.782 [2024-04-24 21:35:05.313582] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.782 [2024-04-24 21:35:05.322562] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.782 [2024-04-24 21:35:05.323043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.323442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.782 [2024-04-24 21:35:05.323494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.782 [2024-04-24 21:35:05.323512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.783 [2024-04-24 21:35:05.323768] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.783 [2024-04-24 21:35:05.324012] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.783 [2024-04-24 21:35:05.324038] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.783 [2024-04-24 21:35:05.324060] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.783 [2024-04-24 21:35:05.327596] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.783 [2024-04-24 21:35:05.336389] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.783 [2024-04-24 21:35:05.336836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.337016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.337045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.783 [2024-04-24 21:35:05.337063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.783 [2024-04-24 21:35:05.337299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.783 [2024-04-24 21:35:05.337540] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.783 [2024-04-24 21:35:05.337564] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.783 [2024-04-24 21:35:05.337580] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.783 [2024-04-24 21:35:05.341131] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.783 [2024-04-24 21:35:05.350340] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.783 [2024-04-24 21:35:05.350781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.351025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.351053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.783 [2024-04-24 21:35:05.351071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.783 [2024-04-24 21:35:05.351307] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.783 [2024-04-24 21:35:05.351549] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.783 [2024-04-24 21:35:05.351573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.783 [2024-04-24 21:35:05.351589] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.783 [2024-04-24 21:35:05.355136] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.783 [2024-04-24 21:35:05.364332] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.783 [2024-04-24 21:35:05.364775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.364989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.365018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.783 [2024-04-24 21:35:05.365036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.783 [2024-04-24 21:35:05.365271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.783 [2024-04-24 21:35:05.365512] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.783 [2024-04-24 21:35:05.365537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.783 [2024-04-24 21:35:05.365553] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.783 [2024-04-24 21:35:05.369094] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.783 [2024-04-24 21:35:05.378290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.783 [2024-04-24 21:35:05.378737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.378946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.378975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.783 [2024-04-24 21:35:05.378992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.783 [2024-04-24 21:35:05.379229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.783 [2024-04-24 21:35:05.379470] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.783 [2024-04-24 21:35:05.379495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.783 [2024-04-24 21:35:05.379511] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.783 [2024-04-24 21:35:05.383052] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.783 [2024-04-24 21:35:05.392227] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.783 [2024-04-24 21:35:05.392698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.393062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.393111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.783 [2024-04-24 21:35:05.393128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.783 [2024-04-24 21:35:05.393363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.783 [2024-04-24 21:35:05.393605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.783 [2024-04-24 21:35:05.393642] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.783 [2024-04-24 21:35:05.393667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.783 [2024-04-24 21:35:05.397195] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.783 [2024-04-24 21:35:05.406157] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.783 [2024-04-24 21:35:05.406639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.406873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.406901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.783 [2024-04-24 21:35:05.406919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.783 [2024-04-24 21:35:05.407154] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.783 [2024-04-24 21:35:05.407396] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.783 [2024-04-24 21:35:05.407420] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.783 [2024-04-24 21:35:05.407435] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.783 [2024-04-24 21:35:05.410978] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.783 [2024-04-24 21:35:05.420150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.783 [2024-04-24 21:35:05.420626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.420869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.420897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.783 [2024-04-24 21:35:05.420915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.783 [2024-04-24 21:35:05.421150] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.783 [2024-04-24 21:35:05.421391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.783 [2024-04-24 21:35:05.421416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.783 [2024-04-24 21:35:05.421431] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.783 [2024-04-24 21:35:05.424974] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.783 [2024-04-24 21:35:05.434146] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.783 [2024-04-24 21:35:05.434581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.434829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.783 [2024-04-24 21:35:05.434859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.783 [2024-04-24 21:35:05.434877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.783 [2024-04-24 21:35:05.435113] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.783 [2024-04-24 21:35:05.435354] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.783 [2024-04-24 21:35:05.435379] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.783 [2024-04-24 21:35:05.435394] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.783 [2024-04-24 21:35:05.438936] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.783 [2024-04-24 21:35:05.448113] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.784 [2024-04-24 21:35:05.448588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.784 [2024-04-24 21:35:05.448815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.784 [2024-04-24 21:35:05.448845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:39.784 [2024-04-24 21:35:05.448863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:39.784 [2024-04-24 21:35:05.449099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:39.784 [2024-04-24 21:35:05.449340] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.784 [2024-04-24 21:35:05.449364] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.784 [2024-04-24 21:35:05.449380] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.784 [2024-04-24 21:35:05.452922] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.042 [2024-04-24 21:35:05.462129] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.042 [2024-04-24 21:35:05.462596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.462804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.462835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.042 [2024-04-24 21:35:05.462854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.042 [2024-04-24 21:35:05.463090] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.042 [2024-04-24 21:35:05.463331] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.042 [2024-04-24 21:35:05.463355] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.042 [2024-04-24 21:35:05.463371] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.042 [2024-04-24 21:35:05.466925] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.042 [2024-04-24 21:35:05.476113] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.042 [2024-04-24 21:35:05.476554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.476768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.476799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.042 [2024-04-24 21:35:05.476817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.042 [2024-04-24 21:35:05.477053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.042 [2024-04-24 21:35:05.477293] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.042 [2024-04-24 21:35:05.477317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.042 [2024-04-24 21:35:05.477333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.042 [2024-04-24 21:35:05.480875] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.042 [2024-04-24 21:35:05.490069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.042 [2024-04-24 21:35:05.490685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.490990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.491050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.042 [2024-04-24 21:35:05.491068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.042 [2024-04-24 21:35:05.491304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.042 [2024-04-24 21:35:05.491545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.042 [2024-04-24 21:35:05.491569] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.042 [2024-04-24 21:35:05.491585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.042 [2024-04-24 21:35:05.495129] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.042 [2024-04-24 21:35:05.503886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.042 [2024-04-24 21:35:05.504363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.504596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.504624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.042 [2024-04-24 21:35:05.504666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.042 [2024-04-24 21:35:05.504906] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.042 [2024-04-24 21:35:05.505147] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.042 [2024-04-24 21:35:05.505172] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.042 [2024-04-24 21:35:05.505188] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.042 [2024-04-24 21:35:05.508731] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.042 [2024-04-24 21:35:05.517699] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.042 [2024-04-24 21:35:05.518142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.518402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.518430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.042 [2024-04-24 21:35:05.518448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.042 [2024-04-24 21:35:05.518704] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.042 [2024-04-24 21:35:05.518947] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.042 [2024-04-24 21:35:05.518972] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.042 [2024-04-24 21:35:05.518988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.042 [2024-04-24 21:35:05.522516] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.042 [2024-04-24 21:35:05.531600] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.042 [2024-04-24 21:35:05.532050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.532377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.042 [2024-04-24 21:35:05.532437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.042 [2024-04-24 21:35:05.532454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.042 [2024-04-24 21:35:05.532710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.042 [2024-04-24 21:35:05.532953] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.532978] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.532994] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.536517] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.545502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.545976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.546231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.546279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.546303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.546540] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.546799] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.546826] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.546842] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.550368] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.559331] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.559772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.560052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.560080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.560098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.560334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.560576] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.560600] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.560615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.564157] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.573122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.573582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.573809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.573839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.573858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.574093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.574334] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.574359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.574374] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.577914] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.587088] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.587600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.587827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.587856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.587875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.588116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.588358] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.588383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.588398] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.591940] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.600902] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.601545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.601797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.601828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.601846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.602083] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.602325] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.602349] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.602364] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.605916] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.614900] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.615387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.615591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.615621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.615656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.615897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.616139] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.616163] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.616179] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.619719] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.628876] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.629348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.629561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.629589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.629607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.629857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.630107] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.630132] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.630147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.633687] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.642846] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.643295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.643534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.643585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.643604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.643855] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.644099] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.644123] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.644139] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.647681] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.656838] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.657286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.657551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.657579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.657597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.657849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.658091] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.658116] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.658131] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.661675] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.670625] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.671096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.671424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.671474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.671491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.671746] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.671988] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.672018] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.672035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.675562] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.684530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.684983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.685277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.685306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.685324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.685559] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.685817] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.685844] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.685860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.689391] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.698348] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.698816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.699025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.699054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.699072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.699308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.699549] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.699573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.043 [2024-04-24 21:35:05.699588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.043 [2024-04-24 21:35:05.703131] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.043 [2024-04-24 21:35:05.712299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.043 [2024-04-24 21:35:05.712736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.712935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.043 [2024-04-24 21:35:05.712963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.043 [2024-04-24 21:35:05.712981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.043 [2024-04-24 21:35:05.713217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.043 [2024-04-24 21:35:05.713459] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.043 [2024-04-24 21:35:05.713483] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.044 [2024-04-24 21:35:05.713504] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.044 [2024-04-24 21:35:05.717050] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.302 [2024-04-24 21:35:05.726243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.302 [2024-04-24 21:35:05.726702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.302 [2024-04-24 21:35:05.726887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.302 [2024-04-24 21:35:05.726918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.303 [2024-04-24 21:35:05.726936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.303 [2024-04-24 21:35:05.727172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.303 [2024-04-24 21:35:05.727414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.303 [2024-04-24 21:35:05.727438] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.303 [2024-04-24 21:35:05.727453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.303 [2024-04-24 21:35:05.731000] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.303 [2024-04-24 21:35:05.740183] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.303 [2024-04-24 21:35:05.740626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.740872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.740901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.303 [2024-04-24 21:35:05.740919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.303 [2024-04-24 21:35:05.741155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.303 [2024-04-24 21:35:05.741396] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.303 [2024-04-24 21:35:05.741421] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.303 [2024-04-24 21:35:05.741436] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.303 [2024-04-24 21:35:05.744979] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.303 [2024-04-24 21:35:05.754153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.303 [2024-04-24 21:35:05.754738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.754945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.754974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.303 [2024-04-24 21:35:05.754992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.303 [2024-04-24 21:35:05.755228] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.303 [2024-04-24 21:35:05.755469] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.303 [2024-04-24 21:35:05.755493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.303 [2024-04-24 21:35:05.755509] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.303 [2024-04-24 21:35:05.759052] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.303 [2024-04-24 21:35:05.768022] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.303 [2024-04-24 21:35:05.768496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.768714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.768747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.303 [2024-04-24 21:35:05.768765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.303 [2024-04-24 21:35:05.769003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.303 [2024-04-24 21:35:05.769245] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.303 [2024-04-24 21:35:05.769270] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.303 [2024-04-24 21:35:05.769285] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.303 [2024-04-24 21:35:05.772833] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.303 [2024-04-24 21:35:05.782011] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.303 [2024-04-24 21:35:05.782449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.782689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.782720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.303 [2024-04-24 21:35:05.782738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.303 [2024-04-24 21:35:05.782974] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.303 [2024-04-24 21:35:05.783216] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.303 [2024-04-24 21:35:05.783240] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.303 [2024-04-24 21:35:05.783256] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.303 [2024-04-24 21:35:05.786801] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.303 [2024-04-24 21:35:05.795978] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.303 [2024-04-24 21:35:05.796446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.796681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.796712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.303 [2024-04-24 21:35:05.796730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.303 [2024-04-24 21:35:05.796967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.303 [2024-04-24 21:35:05.797208] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.303 [2024-04-24 21:35:05.797232] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.303 [2024-04-24 21:35:05.797248] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.303 [2024-04-24 21:35:05.800789] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.303 [2024-04-24 21:35:05.809903] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.303 [2024-04-24 21:35:05.810378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.810616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.810660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.303 [2024-04-24 21:35:05.810682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.303 [2024-04-24 21:35:05.810918] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.303 [2024-04-24 21:35:05.811160] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.303 [2024-04-24 21:35:05.811185] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.303 [2024-04-24 21:35:05.811200] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.303 [2024-04-24 21:35:05.814739] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.303 [2024-04-24 21:35:05.823706] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.303 [2024-04-24 21:35:05.824194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.824577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.824636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.303 [2024-04-24 21:35:05.824663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.303 [2024-04-24 21:35:05.824902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.303 [2024-04-24 21:35:05.825143] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.303 [2024-04-24 21:35:05.825167] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.303 [2024-04-24 21:35:05.825182] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.303 [2024-04-24 21:35:05.828723] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.303 [2024-04-24 21:35:05.837691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.303 [2024-04-24 21:35:05.838128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.838404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.838432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.303 [2024-04-24 21:35:05.838450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.303 [2024-04-24 21:35:05.838704] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.303 [2024-04-24 21:35:05.838948] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.303 [2024-04-24 21:35:05.838972] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.303 [2024-04-24 21:35:05.838987] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.303 [2024-04-24 21:35:05.842521] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.303 [2024-04-24 21:35:05.851483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.303 [2024-04-24 21:35:05.851931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.852137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.303 [2024-04-24 21:35:05.852167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.303 [2024-04-24 21:35:05.852185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.303 [2024-04-24 21:35:05.852422] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.303 [2024-04-24 21:35:05.852683] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.304 [2024-04-24 21:35:05.852709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.304 [2024-04-24 21:35:05.852724] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.304 [2024-04-24 21:35:05.856253] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.304 [2024-04-24 21:35:05.865453] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.304 [2024-04-24 21:35:05.865924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-24 21:35:05.866179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-24 21:35:05.866208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.304 [2024-04-24 21:35:05.866226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.304 [2024-04-24 21:35:05.866462] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.304 [2024-04-24 21:35:05.866713] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.304 [2024-04-24 21:35:05.866738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.304 [2024-04-24 21:35:05.866753] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.304 [2024-04-24 21:35:05.870289] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.304 [2024-04-24 21:35:05.879265] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.304 [2024-04-24 21:35:05.879729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-24 21:35:05.879913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-24 21:35:05.879943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.304 [2024-04-24 21:35:05.879962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.304 [2024-04-24 21:35:05.880199] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.304 [2024-04-24 21:35:05.880441] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.304 [2024-04-24 21:35:05.880465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.304 [2024-04-24 21:35:05.880481] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.304 [2024-04-24 21:35:05.884030] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.304 [2024-04-24 21:35:05.893231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.304 [2024-04-24 21:35:05.893699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-24 21:35:05.893912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-24 21:35:05.893946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.304 [2024-04-24 21:35:05.893965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.304 [2024-04-24 21:35:05.894201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.304 [2024-04-24 21:35:05.894442] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.304 [2024-04-24 21:35:05.894466] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.304 [2024-04-24 21:35:05.894482] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.304 [2024-04-24 21:35:05.898031] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.304 [2024-04-24 21:35:05.907236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.304 [2024-04-24 21:35:05.907699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-24 21:35:05.908470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-24 21:35:05.908505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.304 [2024-04-24 21:35:05.908524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.304 [2024-04-24 21:35:05.908774] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.304 [2024-04-24 21:35:05.909016] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.304 [2024-04-24 21:35:05.909042] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.304 [2024-04-24 21:35:05.909057] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.304 [2024-04-24 21:35:05.912598] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.304 [2024-04-24 21:35:05.921167] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.304 [2024-04-24 21:35:05.921615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-24 21:35:05.921837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.304 [2024-04-24 21:35:05.921866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.304 [2024-04-24 21:35:05.921885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.304 [2024-04-24 21:35:05.922121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.304 [2024-04-24 21:35:05.922361] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.304 [2024-04-24 21:35:05.922385] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.304 [2024-04-24 21:35:05.922401] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.304 [2024-04-24 21:35:05.925967] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.304 [2024-04-24 21:35:05.935165] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.304 [2024-04-24 21:35:05.935634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-24 21:35:05.935851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-24 21:35:05.935880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.304 [2024-04-24 21:35:05.935904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.304 [2024-04-24 21:35:05.936141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.304 [2024-04-24 21:35:05.936382] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.304 [2024-04-24 21:35:05.936406] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.304 [2024-04-24 21:35:05.936421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.304 [2024-04-24 21:35:05.939968] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.304 [2024-04-24 21:35:05.949051] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.304 [2024-04-24 21:35:05.949525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-24 21:35:05.949752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-24 21:35:05.949780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.304 [2024-04-24 21:35:05.949797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.304 [2024-04-24 21:35:05.950042] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.304 [2024-04-24 21:35:05.950285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.304 [2024-04-24 21:35:05.950309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.304 [2024-04-24 21:35:05.950325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.304 [2024-04-24 21:35:05.953933] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.304 [2024-04-24 21:35:05.963089] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.304 [2024-04-24 21:35:05.963694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-24 21:35:05.963891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-24 21:35:05.963918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.304 [2024-04-24 21:35:05.963951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.304 [2024-04-24 21:35:05.964189] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.304 [2024-04-24 21:35:05.964430] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.304 [2024-04-24 21:35:05.964455] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.304 [2024-04-24 21:35:05.964471] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.304 [2024-04-24 21:35:05.968059] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.304 [2024-04-24 21:35:05.977072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.304 [2024-04-24 21:35:05.977546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-24 21:35:05.977777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.304 [2024-04-24 21:35:05.977804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.304 [2024-04-24 21:35:05.977821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.304 [2024-04-24 21:35:05.978078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.304 [2024-04-24 21:35:05.978320] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.304 [2024-04-24 21:35:05.978344] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.304 [2024-04-24 21:35:05.978360] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.564 [2024-04-24 21:35:05.981893] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.564 [2024-04-24 21:35:05.991005] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.564 [2024-04-24 21:35:05.991443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:05.991652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:05.991697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.564 [2024-04-24 21:35:05.991714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.564 [2024-04-24 21:35:05.991941] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.564 [2024-04-24 21:35:05.992155] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.564 [2024-04-24 21:35:05.992176] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.564 [2024-04-24 21:35:05.992189] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.564 [2024-04-24 21:35:05.995744] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.564 [2024-04-24 21:35:06.004966] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.564 [2024-04-24 21:35:06.005450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:06.005675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:06.005702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.564 [2024-04-24 21:35:06.005718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.564 [2024-04-24 21:35:06.005966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.564 [2024-04-24 21:35:06.006208] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.564 [2024-04-24 21:35:06.006232] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.564 [2024-04-24 21:35:06.006248] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.564 [2024-04-24 21:35:06.009800] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.564 [2024-04-24 21:35:06.018988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.564 [2024-04-24 21:35:06.019526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:06.019739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:06.019769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.564 [2024-04-24 21:35:06.019787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.564 [2024-04-24 21:35:06.020022] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.564 [2024-04-24 21:35:06.020269] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.564 [2024-04-24 21:35:06.020294] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.564 [2024-04-24 21:35:06.020311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.564 [2024-04-24 21:35:06.023863] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.564 [2024-04-24 21:35:06.032859] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.564 [2024-04-24 21:35:06.033323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:06.033577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:06.033608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.564 [2024-04-24 21:35:06.033626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.564 [2024-04-24 21:35:06.033873] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.564 [2024-04-24 21:35:06.034114] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.564 [2024-04-24 21:35:06.034139] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.564 [2024-04-24 21:35:06.034155] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.564 [2024-04-24 21:35:06.037702] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.564 [2024-04-24 21:35:06.046679] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.564 [2024-04-24 21:35:06.047144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:06.047422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:06.047469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.564 [2024-04-24 21:35:06.047488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.564 [2024-04-24 21:35:06.047742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.564 [2024-04-24 21:35:06.047985] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.564 [2024-04-24 21:35:06.048010] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.564 [2024-04-24 21:35:06.048025] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.564 [2024-04-24 21:35:06.051551] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.564 [2024-04-24 21:35:06.060532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.564 [2024-04-24 21:35:06.061015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:06.061230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:06.061259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.564 [2024-04-24 21:35:06.061277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.564 [2024-04-24 21:35:06.061513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.564 [2024-04-24 21:35:06.061775] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.564 [2024-04-24 21:35:06.061808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.564 [2024-04-24 21:35:06.061824] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.564 [2024-04-24 21:35:06.065350] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.564 [2024-04-24 21:35:06.074527] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.564 [2024-04-24 21:35:06.074999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.564 [2024-04-24 21:35:06.075241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.075287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.565 [2024-04-24 21:35:06.075305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.565 [2024-04-24 21:35:06.075542] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.565 [2024-04-24 21:35:06.075801] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.565 [2024-04-24 21:35:06.075827] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.565 [2024-04-24 21:35:06.075843] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.565 [2024-04-24 21:35:06.079368] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.565 [2024-04-24 21:35:06.088338] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.565 [2024-04-24 21:35:06.088807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.089048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.089094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.565 [2024-04-24 21:35:06.089112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.565 [2024-04-24 21:35:06.089347] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.565 [2024-04-24 21:35:06.089589] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.565 [2024-04-24 21:35:06.089613] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.565 [2024-04-24 21:35:06.089639] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.565 [2024-04-24 21:35:06.093176] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.565 [2024-04-24 21:35:06.102143] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.565 [2024-04-24 21:35:06.102617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.102839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.102869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.565 [2024-04-24 21:35:06.102887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.565 [2024-04-24 21:35:06.103123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.565 [2024-04-24 21:35:06.103364] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.565 [2024-04-24 21:35:06.103388] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.565 [2024-04-24 21:35:06.103409] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.565 [2024-04-24 21:35:06.106954] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.565 [2024-04-24 21:35:06.116125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.565 [2024-04-24 21:35:06.116570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.116764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.116797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.565 [2024-04-24 21:35:06.116815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.565 [2024-04-24 21:35:06.117052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.565 [2024-04-24 21:35:06.117294] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.565 [2024-04-24 21:35:06.117318] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.565 [2024-04-24 21:35:06.117333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.565 [2024-04-24 21:35:06.120876] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.565 [2024-04-24 21:35:06.130082] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.565 [2024-04-24 21:35:06.130528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.130764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.130795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.565 [2024-04-24 21:35:06.130814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.565 [2024-04-24 21:35:06.131050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.565 [2024-04-24 21:35:06.131292] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.565 [2024-04-24 21:35:06.131316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.565 [2024-04-24 21:35:06.131331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.565 [2024-04-24 21:35:06.134878] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.565 [2024-04-24 21:35:06.144069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.565 [2024-04-24 21:35:06.144541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.144751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.144782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.565 [2024-04-24 21:35:06.144800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.565 [2024-04-24 21:35:06.145037] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.565 [2024-04-24 21:35:06.145279] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.565 [2024-04-24 21:35:06.145303] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.565 [2024-04-24 21:35:06.145318] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.565 [2024-04-24 21:35:06.148869] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.565 [2024-04-24 21:35:06.158040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.565 [2024-04-24 21:35:06.158508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.158712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.158760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.565 [2024-04-24 21:35:06.158779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.565 [2024-04-24 21:35:06.159015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.565 [2024-04-24 21:35:06.159256] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.565 [2024-04-24 21:35:06.159280] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.565 [2024-04-24 21:35:06.159296] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.565 [2024-04-24 21:35:06.162842] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.565 [2024-04-24 21:35:06.172016] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.565 [2024-04-24 21:35:06.172479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.172743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.172779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.565 [2024-04-24 21:35:06.172813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.565 [2024-04-24 21:35:06.173051] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.565 [2024-04-24 21:35:06.173292] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.565 [2024-04-24 21:35:06.173316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.565 [2024-04-24 21:35:06.173331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.565 [2024-04-24 21:35:06.176883] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.565 [2024-04-24 21:35:06.185854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.565 [2024-04-24 21:35:06.186328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.186516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.186546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.565 [2024-04-24 21:35:06.186565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.565 [2024-04-24 21:35:06.186820] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.565 [2024-04-24 21:35:06.187064] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.565 [2024-04-24 21:35:06.187088] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.565 [2024-04-24 21:35:06.187103] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.565 [2024-04-24 21:35:06.190678] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.565 [2024-04-24 21:35:06.199681] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.565 [2024-04-24 21:35:06.200168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.200379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.565 [2024-04-24 21:35:06.200409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.565 [2024-04-24 21:35:06.200428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.565 [2024-04-24 21:35:06.200686] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.566 [2024-04-24 21:35:06.200929] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.566 [2024-04-24 21:35:06.200962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.566 [2024-04-24 21:35:06.200978] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.566 [2024-04-24 21:35:06.204507] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.566 [2024-04-24 21:35:06.213519] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.566 [2024-04-24 21:35:06.213971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.566 [2024-04-24 21:35:06.214234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.566 [2024-04-24 21:35:06.214291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.566 [2024-04-24 21:35:06.214309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.566 [2024-04-24 21:35:06.214545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.566 [2024-04-24 21:35:06.214799] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.566 [2024-04-24 21:35:06.214826] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.566 [2024-04-24 21:35:06.214841] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.566 [2024-04-24 21:35:06.218389] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.566 [2024-04-24 21:35:06.227385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.566 [2024-04-24 21:35:06.227862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.566 [2024-04-24 21:35:06.228074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.566 [2024-04-24 21:35:06.228102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.566 [2024-04-24 21:35:06.228120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.566 [2024-04-24 21:35:06.228356] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.566 [2024-04-24 21:35:06.228598] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.566 [2024-04-24 21:35:06.228624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.566 [2024-04-24 21:35:06.228652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.566 [2024-04-24 21:35:06.232197] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.826 [2024-04-24 21:35:06.241389] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.826 [2024-04-24 21:35:06.241866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.242238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.242297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.826 [2024-04-24 21:35:06.242315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.826 [2024-04-24 21:35:06.242551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.826 [2024-04-24 21:35:06.242804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.826 [2024-04-24 21:35:06.242830] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.826 [2024-04-24 21:35:06.242846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.826 [2024-04-24 21:35:06.246384] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.826 [2024-04-24 21:35:06.255397] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.826 [2024-04-24 21:35:06.255818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.256113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.256159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.826 [2024-04-24 21:35:06.256178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.826 [2024-04-24 21:35:06.256415] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.826 [2024-04-24 21:35:06.256670] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.826 [2024-04-24 21:35:06.256697] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.826 [2024-04-24 21:35:06.256713] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.826 [2024-04-24 21:35:06.260251] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.826 [2024-04-24 21:35:06.269242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.826 [2024-04-24 21:35:06.269750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.269961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.269989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.826 [2024-04-24 21:35:06.270007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.826 [2024-04-24 21:35:06.270243] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.826 [2024-04-24 21:35:06.270485] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.826 [2024-04-24 21:35:06.270511] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.826 [2024-04-24 21:35:06.270527] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.826 [2024-04-24 21:35:06.274087] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.826 [2024-04-24 21:35:06.283056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.826 [2024-04-24 21:35:06.283517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.283876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.283932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.826 [2024-04-24 21:35:06.283951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.826 [2024-04-24 21:35:06.284188] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.826 [2024-04-24 21:35:06.284429] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.826 [2024-04-24 21:35:06.284454] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.826 [2024-04-24 21:35:06.284470] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.826 [2024-04-24 21:35:06.288027] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.826 [2024-04-24 21:35:06.297009] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.826 [2024-04-24 21:35:06.297453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.297691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.297722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.826 [2024-04-24 21:35:06.297741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.826 [2024-04-24 21:35:06.297978] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.826 [2024-04-24 21:35:06.298220] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.826 [2024-04-24 21:35:06.298245] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.826 [2024-04-24 21:35:06.298261] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.826 [2024-04-24 21:35:06.301808] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.826 [2024-04-24 21:35:06.310985] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.826 [2024-04-24 21:35:06.311423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.311638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.311672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.826 [2024-04-24 21:35:06.311692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.826 [2024-04-24 21:35:06.311929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.826 [2024-04-24 21:35:06.312171] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.826 [2024-04-24 21:35:06.312196] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.826 [2024-04-24 21:35:06.312211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.826 [2024-04-24 21:35:06.315757] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.826 [2024-04-24 21:35:06.324936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.826 [2024-04-24 21:35:06.325402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.325613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.325658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.826 [2024-04-24 21:35:06.325686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.826 [2024-04-24 21:35:06.325924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.826 [2024-04-24 21:35:06.326166] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.826 [2024-04-24 21:35:06.326191] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.826 [2024-04-24 21:35:06.326207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.826 [2024-04-24 21:35:06.329762] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.826 [2024-04-24 21:35:06.338745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.826 [2024-04-24 21:35:06.339208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.339417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.826 [2024-04-24 21:35:06.339446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.826 [2024-04-24 21:35:06.339465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.826 [2024-04-24 21:35:06.339721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.826 [2024-04-24 21:35:06.339963] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.827 [2024-04-24 21:35:06.339989] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.827 [2024-04-24 21:35:06.340005] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.827 [2024-04-24 21:35:06.343535] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.827 [2024-04-24 21:35:06.352725] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.827 [2024-04-24 21:35:06.353193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.353595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.353661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.827 [2024-04-24 21:35:06.353682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.827 [2024-04-24 21:35:06.353919] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.827 [2024-04-24 21:35:06.354159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.827 [2024-04-24 21:35:06.354184] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.827 [2024-04-24 21:35:06.354200] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.827 [2024-04-24 21:35:06.357748] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.827 [2024-04-24 21:35:06.366524] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.827 [2024-04-24 21:35:06.367008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.367291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.367342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.827 [2024-04-24 21:35:06.367360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.827 [2024-04-24 21:35:06.367602] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.827 [2024-04-24 21:35:06.367863] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.827 [2024-04-24 21:35:06.367890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.827 [2024-04-24 21:35:06.367907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.827 [2024-04-24 21:35:06.371438] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.827 [2024-04-24 21:35:06.380418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.827 [2024-04-24 21:35:06.380893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.381104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.381143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.827 [2024-04-24 21:35:06.381161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.827 [2024-04-24 21:35:06.381397] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.827 [2024-04-24 21:35:06.381657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.827 [2024-04-24 21:35:06.381684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.827 [2024-04-24 21:35:06.381701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.827 [2024-04-24 21:35:06.385230] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.827 [2024-04-24 21:35:06.394406] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.827 [2024-04-24 21:35:06.394859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.395151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.395180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.827 [2024-04-24 21:35:06.395198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.827 [2024-04-24 21:35:06.395436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.827 [2024-04-24 21:35:06.395697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.827 [2024-04-24 21:35:06.395724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.827 [2024-04-24 21:35:06.395741] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.827 [2024-04-24 21:35:06.399268] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.827 [2024-04-24 21:35:06.408234] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.827 [2024-04-24 21:35:06.408771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.409014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.409043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.827 [2024-04-24 21:35:06.409061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.827 [2024-04-24 21:35:06.409299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.827 [2024-04-24 21:35:06.409546] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.827 [2024-04-24 21:35:06.409571] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.827 [2024-04-24 21:35:06.409587] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.827 [2024-04-24 21:35:06.413135] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.827 [2024-04-24 21:35:06.422105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.827 [2024-04-24 21:35:06.422571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.422801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.827 [2024-04-24 21:35:06.422832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:40.827 [2024-04-24 21:35:06.422850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:40.827 [2024-04-24 21:35:06.423087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:40.827 [2024-04-24 21:35:06.423330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.827 [2024-04-24 21:35:06.423355] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.827 [2024-04-24 21:35:06.423370] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.827 [2024-04-24 21:35:06.426921] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.827 [2024-04-24 21:35:06.435894] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.827 [2024-04-24 21:35:06.436372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.827 [2024-04-24 21:35:06.436581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.827 [2024-04-24 21:35:06.436609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.827 [2024-04-24 21:35:06.436639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.827 [2024-04-24 21:35:06.436886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.827 [2024-04-24 21:35:06.437129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.827 [2024-04-24 21:35:06.437153] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.827 [2024-04-24 21:35:06.437169] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.827 [2024-04-24 21:35:06.440714] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2677129 Killed "${NVMF_APP[@]}" "$@"
00:20:40.827 21:35:06 -- host/bdevperf.sh@36 -- # tgt_init
00:20:40.827 21:35:06 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:20:40.827 21:35:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:40.827 21:35:06 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:40.827 21:35:06 -- common/autotest_common.sh@10 -- # set +x
00:20:40.827 21:35:06 -- nvmf/common.sh@470 -- # nvmfpid=2678083
00:20:40.827 21:35:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:20:40.827 21:35:06 -- nvmf/common.sh@471 -- # waitforlisten 2678083
00:20:40.827 21:35:06 -- common/autotest_common.sh@817 -- # '[' -z 2678083 ']'
00:20:40.827 21:35:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:40.827 21:35:06 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:40.827 21:35:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:40.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:40.827 21:35:06 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:40.827 21:35:06 -- common/autotest_common.sh@10 -- # set +x
00:20:40.827 [2024-04-24 21:35:06.449704] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.827 [2024-04-24 21:35:06.450179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.827 [2024-04-24 21:35:06.450420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.827 [2024-04-24 21:35:06.450448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.827 [2024-04-24 21:35:06.450467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.827 [2024-04-24 21:35:06.450715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.827 [2024-04-24 21:35:06.450958] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.827 [2024-04-24 21:35:06.450983] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.827 [2024-04-24 21:35:06.451000] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.828 [2024-04-24 21:35:06.454540] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.828 [2024-04-24 21:35:06.463523] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.828 [2024-04-24 21:35:06.464029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.828 [2024-04-24 21:35:06.464269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.828 [2024-04-24 21:35:06.464298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.828 [2024-04-24 21:35:06.464316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.828 [2024-04-24 21:35:06.464551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.828 [2024-04-24 21:35:06.464804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.828 [2024-04-24 21:35:06.464829] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.828 [2024-04-24 21:35:06.464845] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.828 [2024-04-24 21:35:06.468382] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.828 [2024-04-24 21:35:06.477365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.828 [2024-04-24 21:35:06.477820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.828 [2024-04-24 21:35:06.478056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.828 [2024-04-24 21:35:06.478086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.828 [2024-04-24 21:35:06.478104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.828 [2024-04-24 21:35:06.478341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.828 [2024-04-24 21:35:06.478583] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.828 [2024-04-24 21:35:06.478608] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.828 [2024-04-24 21:35:06.478623] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.828 [2024-04-24 21:35:06.482191] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.828 [2024-04-24 21:35:06.490869] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.828 [2024-04-24 21:35:06.491332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.828 [2024-04-24 21:35:06.491517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.828 [2024-04-24 21:35:06.491544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:40.828 [2024-04-24 21:35:06.491561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:40.828 [2024-04-24 21:35:06.491825] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:40.828 [2024-04-24 21:35:06.492048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.828 [2024-04-24 21:35:06.492067] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.828 [2024-04-24 21:35:06.492080] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.828 [2024-04-24 21:35:06.495098] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.828 [2024-04-24 21:35:06.495719] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:20:40.828 [2024-04-24 21:35:06.495783] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:41.088 [2024-04-24 21:35:06.504307] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:41.088 [2024-04-24 21:35:06.504714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.088 [2024-04-24 21:35:06.504907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.088 [2024-04-24 21:35:06.504934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:41.088 [2024-04-24 21:35:06.504950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:41.088 [2024-04-24 21:35:06.505194] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:41.088 [2024-04-24 21:35:06.505386] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:41.088 [2024-04-24 21:35:06.505405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:41.088 [2024-04-24 21:35:06.505418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:41.088 [2024-04-24 21:35:06.508739] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:41.088 [2024-04-24 21:35:06.517544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:41.088 [2024-04-24 21:35:06.518029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.088 [2024-04-24 21:35:06.518216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.088 [2024-04-24 21:35:06.518241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420
00:20:41.088 [2024-04-24 21:35:06.518257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set
00:20:41.088 [2024-04-24 21:35:06.518501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor
00:20:41.088 [2024-04-24 21:35:06.518726] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:41.088 [2024-04-24 21:35:06.518753] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:41.088 [2024-04-24 21:35:06.518767] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:41.088 [2024-04-24 21:35:06.521618] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:41.088 [2024-04-24 21:35:06.530771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.088 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.088 [2024-04-24 21:35:06.531188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.088 [2024-04-24 21:35:06.531431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.088 [2024-04-24 21:35:06.531457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.088 [2024-04-24 21:35:06.531473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.088 [2024-04-24 21:35:06.531728] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.088 [2024-04-24 21:35:06.531926] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.088 [2024-04-24 21:35:06.531960] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.088 [2024-04-24 21:35:06.531973] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.088 [2024-04-24 21:35:06.534961] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.088 [2024-04-24 21:35:06.544731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.088 [2024-04-24 21:35:06.545191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.088 [2024-04-24 21:35:06.545385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.088 [2024-04-24 21:35:06.545410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.088 [2024-04-24 21:35:06.545426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.089 [2024-04-24 21:35:06.545693] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.089 [2024-04-24 21:35:06.545920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.089 [2024-04-24 21:35:06.545941] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.089 [2024-04-24 21:35:06.545955] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.089 [2024-04-24 21:35:06.549444] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
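Every cycle in this stretch is the same event: the bdev_nvme reconnect path calls connect(), gets errno 111 because the old target process was just killed, and the controller reset is marked failed until the new nvmf_tgt (pid 2678083) is listening again. A one-liner to confirm what errno 111 means on Linux, assuming only that python3 is available on the test box:

  python3 -c 'import errno,os; print(errno.errorcode[111], os.strerror(111))'
  # prints: ECONNREFUSED Connection refused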
00:20:41.089 [2024-04-24 21:35:06.558640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.089 [2024-04-24 21:35:06.559091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.559302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.559330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.089 [2024-04-24 21:35:06.559348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.089 [2024-04-24 21:35:06.559584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.089 [2024-04-24 21:35:06.559828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.089 [2024-04-24 21:35:06.559851] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.089 [2024-04-24 21:35:06.559872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.089 [2024-04-24 21:35:06.563553] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.089 [2024-04-24 21:35:06.566737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:41.089 [2024-04-24 21:35:06.572548] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.089 [2024-04-24 21:35:06.573108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.573353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.573382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.089 [2024-04-24 21:35:06.573402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.089 [2024-04-24 21:35:06.573650] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.089 [2024-04-24 21:35:06.573875] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.089 [2024-04-24 21:35:06.573897] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.089 [2024-04-24 21:35:06.573926] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.089 [2024-04-24 21:35:06.577398] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.089 [2024-04-24 21:35:06.586396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.089 [2024-04-24 21:35:06.586951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.587189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.587219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.089 [2024-04-24 21:35:06.587239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.089 [2024-04-24 21:35:06.587483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.089 [2024-04-24 21:35:06.587748] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.089 [2024-04-24 21:35:06.587770] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.089 [2024-04-24 21:35:06.587785] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.089 [2024-04-24 21:35:06.591254] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.089 [2024-04-24 21:35:06.600218] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.089 [2024-04-24 21:35:06.600680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.600853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.600879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.089 [2024-04-24 21:35:06.600897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.089 [2024-04-24 21:35:06.601149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.089 [2024-04-24 21:35:06.601391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.089 [2024-04-24 21:35:06.601417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.089 [2024-04-24 21:35:06.601444] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.089 [2024-04-24 21:35:06.604884] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.089 [2024-04-24 21:35:06.614054] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.089 [2024-04-24 21:35:06.614532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.614782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.614810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.089 [2024-04-24 21:35:06.614827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.089 [2024-04-24 21:35:06.615073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.089 [2024-04-24 21:35:06.615315] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.089 [2024-04-24 21:35:06.615341] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.089 [2024-04-24 21:35:06.615357] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.089 [2024-04-24 21:35:06.618820] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.089 [2024-04-24 21:35:06.627807] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.089 [2024-04-24 21:35:06.628299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.628499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.628530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.089 [2024-04-24 21:35:06.628550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.089 [2024-04-24 21:35:06.628828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.089 [2024-04-24 21:35:06.629066] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.089 [2024-04-24 21:35:06.629091] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.089 [2024-04-24 21:35:06.629108] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.089 [2024-04-24 21:35:06.632564] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.089 [2024-04-24 21:35:06.641557] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.089 [2024-04-24 21:35:06.642168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.642437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.642467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.089 [2024-04-24 21:35:06.642488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.089 [2024-04-24 21:35:06.642745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.089 [2024-04-24 21:35:06.642968] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.089 [2024-04-24 21:35:06.643007] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.089 [2024-04-24 21:35:06.643025] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.089 [2024-04-24 21:35:06.646503] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.089 [2024-04-24 21:35:06.655285] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.089 [2024-04-24 21:35:06.655737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.655970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.655997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.089 [2024-04-24 21:35:06.656014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.089 [2024-04-24 21:35:06.656271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.089 [2024-04-24 21:35:06.656514] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.089 [2024-04-24 21:35:06.656539] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.089 [2024-04-24 21:35:06.656555] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.089 [2024-04-24 21:35:06.660006] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.089 [2024-04-24 21:35:06.669166] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.089 [2024-04-24 21:35:06.669646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.669834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.089 [2024-04-24 21:35:06.669860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.089 [2024-04-24 21:35:06.669876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.089 [2024-04-24 21:35:06.670127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.089 [2024-04-24 21:35:06.670369] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.089 [2024-04-24 21:35:06.670394] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.090 [2024-04-24 21:35:06.670410] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.090 [2024-04-24 21:35:06.673860] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.090 [2024-04-24 21:35:06.682351] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.090 [2024-04-24 21:35:06.682384] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.090 [2024-04-24 21:35:06.682397] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.090 [2024-04-24 21:35:06.682409] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.090 [2024-04-24 21:35:06.682419] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
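The four app_setup_trace NOTICE records above spell out how to inspect the tracepoints enabled by -e 0xFFFF. A minimal sketch of the two options the log itself names (the shm name nvmf, instance id 0, and the /dev/shm path are taken from the NOTICE lines; the destination path is illustrative):

  # live: snapshot trace events from the running target
  spdk_trace -s nvmf -i 0
  # offline: preserve the shm file for later analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0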
00:20:41.090 [2024-04-24 21:35:06.682486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.090 [2024-04-24 21:35:06.682547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.090 [2024-04-24 21:35:06.682550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.090 [2024-04-24 21:35:06.682929] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.090 [2024-04-24 21:35:06.683398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.683598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.683624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.090 [2024-04-24 21:35:06.683658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.090 [2024-04-24 21:35:06.683887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.090 [2024-04-24 21:35:06.684128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.090 [2024-04-24 21:35:06.684151] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.090 [2024-04-24 21:35:06.684165] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.090 [2024-04-24 21:35:06.687289] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.090 [2024-04-24 21:35:06.696362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.090 [2024-04-24 21:35:06.696963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.697175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.697202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.090 [2024-04-24 21:35:06.697221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.090 [2024-04-24 21:35:06.697471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.090 [2024-04-24 21:35:06.697708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.090 [2024-04-24 21:35:06.697731] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.090 [2024-04-24 21:35:06.697747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.090 [2024-04-24 21:35:06.700857] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.090 [2024-04-24 21:35:06.709867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.090 [2024-04-24 21:35:06.710567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.710788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.710818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.090 [2024-04-24 21:35:06.710839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.090 [2024-04-24 21:35:06.711099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.090 [2024-04-24 21:35:06.711309] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.090 [2024-04-24 21:35:06.711332] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.090 [2024-04-24 21:35:06.711347] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.090 [2024-04-24 21:35:06.714462] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.090 [2024-04-24 21:35:06.723306] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.090 [2024-04-24 21:35:06.723903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.724095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.724123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.090 [2024-04-24 21:35:06.724142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.090 [2024-04-24 21:35:06.724405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.090 [2024-04-24 21:35:06.724641] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.090 [2024-04-24 21:35:06.724670] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.090 [2024-04-24 21:35:06.724688] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.090 [2024-04-24 21:35:06.727802] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.090 [2024-04-24 21:35:06.736875] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.090 [2024-04-24 21:35:06.737460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.737656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.737684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.090 [2024-04-24 21:35:06.737703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.090 [2024-04-24 21:35:06.737924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.090 [2024-04-24 21:35:06.738167] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.090 [2024-04-24 21:35:06.738190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.090 [2024-04-24 21:35:06.738205] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.090 [2024-04-24 21:35:06.741397] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.090 [2024-04-24 21:35:06.750375] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.090 [2024-04-24 21:35:06.750957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.751151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.090 [2024-04-24 21:35:06.751180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.090 [2024-04-24 21:35:06.751200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.090 [2024-04-24 21:35:06.751454] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.090 [2024-04-24 21:35:06.751692] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.090 [2024-04-24 21:35:06.751716] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.090 [2024-04-24 21:35:06.751733] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.090 [2024-04-24 21:35:06.754847] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.351 [2024-04-24 21:35:06.763968] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.351 [2024-04-24 21:35:06.764467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.351 [2024-04-24 21:35:06.764694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.351 [2024-04-24 21:35:06.764722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.351 [2024-04-24 21:35:06.764742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.351 [2024-04-24 21:35:06.764979] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.351 [2024-04-24 21:35:06.765216] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.351 [2024-04-24 21:35:06.765238] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.351 [2024-04-24 21:35:06.765253] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.351 [2024-04-24 21:35:06.768363] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.351 [2024-04-24 21:35:06.777481] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.351 [2024-04-24 21:35:06.777896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.351 [2024-04-24 21:35:06.778096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.351 [2024-04-24 21:35:06.778121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.351 [2024-04-24 21:35:06.778137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.351 [2024-04-24 21:35:06.778362] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.351 [2024-04-24 21:35:06.778582] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.351 [2024-04-24 21:35:06.778604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.351 [2024-04-24 21:35:06.778618] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.351 [2024-04-24 21:35:06.781795] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.351 [2024-04-24 21:35:06.791063] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.351 [2024-04-24 21:35:06.791547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.351 [2024-04-24 21:35:06.791750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.351 [2024-04-24 21:35:06.791777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.351 [2024-04-24 21:35:06.791792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.351 [2024-04-24 21:35:06.792032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.351 [2024-04-24 21:35:06.792238] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.351 [2024-04-24 21:35:06.792260] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.351 [2024-04-24 21:35:06.792273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.351 [2024-04-24 21:35:06.795378] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.351 [2024-04-24 21:35:06.804435] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.351 21:35:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:41.351 21:35:06 -- common/autotest_common.sh@850 -- # return 0 00:20:41.351 [2024-04-24 21:35:06.804868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.351 21:35:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:41.351 21:35:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:41.351 [2024-04-24 21:35:06.805067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.351 [2024-04-24 21:35:06.805092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.351 21:35:06 -- common/autotest_common.sh@10 -- # set +x 00:20:41.351 [2024-04-24 21:35:06.805109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.351 [2024-04-24 21:35:06.805329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.351 [2024-04-24 21:35:06.805556] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.351 [2024-04-24 21:35:06.805580] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.351 [2024-04-24 21:35:06.805594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.352 [2024-04-24 21:35:06.808710] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.352 [2024-04-24 21:35:06.817882] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.352 [2024-04-24 21:35:06.818358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.818559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.818585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.352 [2024-04-24 21:35:06.818602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.352 [2024-04-24 21:35:06.818823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.352 [2024-04-24 21:35:06.819064] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.352 [2024-04-24 21:35:06.819085] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.352 [2024-04-24 21:35:06.819098] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.352 [2024-04-24 21:35:06.822212] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.352 21:35:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.352 21:35:06 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:41.352 21:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.352 21:35:06 -- common/autotest_common.sh@10 -- # set +x 00:20:41.352 [2024-04-24 21:35:06.831338] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.352 [2024-04-24 21:35:06.831771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.831966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.831994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.352 [2024-04-24 21:35:06.832011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.352 [2024-04-24 21:35:06.832250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.352 [2024-04-24 21:35:06.832456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.352 [2024-04-24 21:35:06.832478] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.352 [2024-04-24 21:35:06.832492] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.352 [2024-04-24 21:35:06.835283] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.352 [2024-04-24 21:35:06.835600] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.352 21:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.352 21:35:06 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:41.352 21:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.352 21:35:06 -- common/autotest_common.sh@10 -- # set +x 00:20:41.352 [2024-04-24 21:35:06.844954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.352 [2024-04-24 21:35:06.845375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.845601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.845637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.352 [2024-04-24 21:35:06.845656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.352 [2024-04-24 21:35:06.845880] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.352 [2024-04-24 21:35:06.846112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.352 [2024-04-24 21:35:06.846133] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.352 [2024-04-24 21:35:06.846146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.352 [2024-04-24 21:35:06.849297] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.352 [2024-04-24 21:35:06.858330] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.352 [2024-04-24 21:35:06.858771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.858964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.858991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.352 [2024-04-24 21:35:06.859007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.352 [2024-04-24 21:35:06.859248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.352 [2024-04-24 21:35:06.859462] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.352 [2024-04-24 21:35:06.859483] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.352 [2024-04-24 21:35:06.859496] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.352 [2024-04-24 21:35:06.862582] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.352 [2024-04-24 21:35:06.871790] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.352 [2024-04-24 21:35:06.872415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.872658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.872690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.352 [2024-04-24 21:35:06.872710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.352 [2024-04-24 21:35:06.872944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.352 [2024-04-24 21:35:06.873170] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.352 [2024-04-24 21:35:06.873192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.352 [2024-04-24 21:35:06.873208] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.352 [2024-04-24 21:35:06.876318] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.352 Malloc0 00:20:41.352 21:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.352 21:35:06 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.352 21:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.352 21:35:06 -- common/autotest_common.sh@10 -- # set +x 00:20:41.352 [2024-04-24 21:35:06.885251] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.352 [2024-04-24 21:35:06.885757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.885962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.885989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.352 [2024-04-24 21:35:06.886007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.352 [2024-04-24 21:35:06.886254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.352 [2024-04-24 21:35:06.886462] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.352 [2024-04-24 21:35:06.886484] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.352 [2024-04-24 21:35:06.886500] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.352 [2024-04-24 21:35:06.889790] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:41.352 21:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.352 21:35:06 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.352 21:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.352 21:35:06 -- common/autotest_common.sh@10 -- # set +x 00:20:41.352 [2024-04-24 21:35:06.898823] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.352 21:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.352 21:35:06 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.352 [2024-04-24 21:35:06.899256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 21:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.352 21:35:06 -- common/autotest_common.sh@10 -- # set +x 00:20:41.352 [2024-04-24 21:35:06.899483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-24 21:35:06.899510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bce160 with addr=10.0.0.2, port=4420 00:20:41.352 [2024-04-24 21:35:06.899526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce160 is same with the state(5) to be set 00:20:41.352 [2024-04-24 21:35:06.899752] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bce160 (9): Bad file descriptor 00:20:41.352 [2024-04-24 21:35:06.899985] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.352 [2024-04-24 21:35:06.900008] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.352 [2024-04-24 21:35:06.900022] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.352 [2024-04-24 21:35:06.902967] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.352 [2024-04-24 21:35:06.903243] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:41.352 21:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.352 21:35:06 -- host/bdevperf.sh@38 -- # wait 2677416 00:20:41.352 [2024-04-24 21:35:06.912361] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.352 [2024-04-24 21:35:06.951444] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
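Interleaved with the reconnect noise above, the xtrace lines show the full target bring-up that tgt_init performs through rpc_cmd. Collected in order, and assuming only that rpc_cmd forwards to scripts/rpc.py against the default /var/tmp/spdk.sock as usual (every RPC name and argument below is copied from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up (the 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice), the host side reconnects successfully and bdevperf can finish its run.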
00:20:51.333 00:20:51.333 Latency(us) 00:20:51.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.333 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:51.333 Verification LBA range: start 0x0 length 0x4000 00:20:51.333 Nvme1n1 : 15.00 6241.59 24.38 8832.62 0.00 8466.71 819.20 25243.50 00:20:51.333 =================================================================================================================== 00:20:51.333 Total : 6241.59 24.38 8832.62 0.00 8466.71 819.20 25243.50 00:20:51.333 21:35:16 -- host/bdevperf.sh@39 -- # sync 00:20:51.333 21:35:16 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.333 21:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.333 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:20:51.333 21:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.333 21:35:16 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:20:51.333 21:35:16 -- host/bdevperf.sh@44 -- # nvmftestfini 00:20:51.333 21:35:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:51.333 21:35:16 -- nvmf/common.sh@117 -- # sync 00:20:51.333 21:35:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.333 21:35:16 -- nvmf/common.sh@120 -- # set +e 00:20:51.333 21:35:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.333 21:35:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.333 rmmod nvme_tcp 00:20:51.333 rmmod nvme_fabrics 00:20:51.333 rmmod nvme_keyring 00:20:51.333 21:35:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.333 21:35:16 -- nvmf/common.sh@124 -- # set -e 00:20:51.333 21:35:16 -- nvmf/common.sh@125 -- # return 0 00:20:51.333 21:35:16 -- nvmf/common.sh@478 -- # '[' -n 2678083 ']' 00:20:51.333 21:35:16 -- nvmf/common.sh@479 -- # killprocess 2678083 00:20:51.333 21:35:16 -- common/autotest_common.sh@936 -- # '[' -z 2678083 ']' 00:20:51.333 21:35:16 -- common/autotest_common.sh@940 -- # kill -0 2678083 00:20:51.333 21:35:16 -- common/autotest_common.sh@941 -- # uname 00:20:51.333 21:35:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:51.333 21:35:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2678083 00:20:51.333 21:35:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:51.333 21:35:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:51.333 21:35:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2678083' 00:20:51.333 killing process with pid 2678083 00:20:51.333 21:35:16 -- common/autotest_common.sh@955 -- # kill 2678083 00:20:51.333 21:35:16 -- common/autotest_common.sh@960 -- # wait 2678083 00:20:51.333 21:35:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:51.333 21:35:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:51.333 21:35:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:51.333 21:35:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.333 21:35:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.333 21:35:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.333 21:35:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.333 21:35:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.238 21:35:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:53.238 00:20:53.238 real 0m22.511s 00:20:53.238 user 0m59.324s 00:20:53.238 sys 0m4.466s 00:20:53.238 21:35:18 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:20:53.238 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:20:53.238 ************************************ 00:20:53.238 END TEST nvmf_bdevperf 00:20:53.238 ************************************ 00:20:53.238 21:35:18 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:20:53.238 21:35:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:53.238 21:35:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:53.238 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:20:53.238 ************************************ 00:20:53.238 START TEST nvmf_target_disconnect 00:20:53.238 ************************************ 00:20:53.238 21:35:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:20:53.238 * Looking for test storage... 00:20:53.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:53.238 21:35:18 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.239 21:35:18 -- nvmf/common.sh@7 -- # uname -s 00:20:53.239 21:35:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.239 21:35:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.239 21:35:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.239 21:35:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.239 21:35:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.239 21:35:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.239 21:35:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.239 21:35:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.239 21:35:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.239 21:35:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.239 21:35:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.239 21:35:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.239 21:35:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.239 21:35:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.239 21:35:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.239 21:35:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.239 21:35:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.239 21:35:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.239 21:35:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.239 21:35:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.239 21:35:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.239 21:35:18 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.239 21:35:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.239 21:35:18 -- paths/export.sh@5 -- # export PATH 00:20:53.239 21:35:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.239 21:35:18 -- nvmf/common.sh@47 -- # : 0 00:20:53.239 21:35:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.239 21:35:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.239 21:35:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.239 21:35:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.239 21:35:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.239 21:35:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.239 21:35:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.239 21:35:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.239 21:35:18 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:53.239 21:35:18 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:20:53.239 21:35:18 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:20:53.239 21:35:18 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:20:53.239 21:35:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:53.239 21:35:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.239 21:35:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:53.239 21:35:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:53.239 21:35:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:53.239 21:35:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.239 21:35:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.239 21:35:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.239 21:35:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:53.239 21:35:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:53.239 21:35:18 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:20:53.239 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:20:55.160 21:35:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:55.160 21:35:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:55.160 21:35:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:55.160 21:35:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:55.160 21:35:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:55.160 21:35:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:55.160 21:35:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:55.160 21:35:20 -- nvmf/common.sh@295 -- # net_devs=() 00:20:55.160 21:35:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:55.160 21:35:20 -- nvmf/common.sh@296 -- # e810=() 00:20:55.160 21:35:20 -- nvmf/common.sh@296 -- # local -ga e810 00:20:55.160 21:35:20 -- nvmf/common.sh@297 -- # x722=() 00:20:55.160 21:35:20 -- nvmf/common.sh@297 -- # local -ga x722 00:20:55.160 21:35:20 -- nvmf/common.sh@298 -- # mlx=() 00:20:55.160 21:35:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:55.160 21:35:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.160 21:35:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.160 21:35:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.160 21:35:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.160 21:35:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.160 21:35:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.160 21:35:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.160 21:35:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.160 21:35:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.160 21:35:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.160 21:35:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.160 21:35:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:55.160 21:35:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:55.160 21:35:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:55.160 21:35:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.160 21:35:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:55.160 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:55.160 21:35:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.160 21:35:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:55.160 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:55.160 21:35:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.160 21:35:20 
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:55.160 21:35:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.160 21:35:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.160 21:35:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:55.160 21:35:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.160 21:35:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:55.160 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:55.160 21:35:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.160 21:35:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.160 21:35:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.160 21:35:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:55.160 21:35:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.160 21:35:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:55.160 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:55.160 21:35:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.160 21:35:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:55.160 21:35:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:55.160 21:35:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:55.160 21:35:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:55.160 21:35:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.160 21:35:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.160 21:35:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.160 21:35:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:55.160 21:35:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.160 21:35:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.160 21:35:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:55.160 21:35:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.160 21:35:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.160 21:35:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:55.160 21:35:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:55.160 21:35:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.160 21:35:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.160 21:35:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.160 21:35:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.160 21:35:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:55.160 21:35:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.160 21:35:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.160 21:35:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.160 21:35:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:55.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:55.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:20:55.160 00:20:55.160 --- 10.0.0.2 ping statistics --- 00:20:55.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.160 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:20:55.160 21:35:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:20:55.160 00:20:55.160 --- 10.0.0.1 ping statistics --- 00:20:55.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.160 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:20:55.160 21:35:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.418 21:35:20 -- nvmf/common.sh@411 -- # return 0 00:20:55.418 21:35:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:55.418 21:35:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.418 21:35:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:55.418 21:35:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:55.418 21:35:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.418 21:35:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:55.418 21:35:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:55.418 21:35:20 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:20:55.418 21:35:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:55.418 21:35:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:55.418 21:35:20 -- common/autotest_common.sh@10 -- # set +x 00:20:55.418 ************************************ 00:20:55.418 START TEST nvmf_target_disconnect_tc1 00:20:55.418 ************************************ 00:20:55.418 21:35:20 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:20:55.418 21:35:20 -- host/target_disconnect.sh@32 -- # set +e 00:20:55.418 21:35:20 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:55.418 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.418 [2024-04-24 21:35:21.063515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.418 [2024-04-24 21:35:21.063847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.418 [2024-04-24 21:35:21.063878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x503ad0 with addr=10.0.0.2, port=4420 00:20:55.418 [2024-04-24 21:35:21.063910] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:55.418 [2024-04-24 21:35:21.063949] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:55.418 [2024-04-24 21:35:21.063964] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:20:55.418 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:20:55.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:20:55.418 Initializing NVMe Controllers 00:20:55.418 21:35:21 -- host/target_disconnect.sh@33 -- # trap - ERR 00:20:55.418 21:35:21 -- host/target_disconnect.sh@33 -- # print_backtrace 00:20:55.418 21:35:21 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:20:55.418 21:35:21 -- common/autotest_common.sh@1139 -- # return 0 00:20:55.418 
21:35:21 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:20:55.418 21:35:21 -- host/target_disconnect.sh@41 -- # set -e 00:20:55.418 00:20:55.418 real 0m0.101s 00:20:55.418 user 0m0.039s 00:20:55.418 sys 0m0.061s 00:20:55.418 21:35:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:55.418 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:20:55.418 ************************************ 00:20:55.418 END TEST nvmf_target_disconnect_tc1 00:20:55.418 ************************************ 00:20:55.677 21:35:21 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:20:55.677 21:35:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:55.677 21:35:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:55.677 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:20:55.677 ************************************ 00:20:55.677 START TEST nvmf_target_disconnect_tc2 00:20:55.677 ************************************ 00:20:55.677 21:35:21 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:20:55.677 21:35:21 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:20:55.677 21:35:21 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:55.677 21:35:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:55.677 21:35:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:55.677 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:20:55.677 21:35:21 -- nvmf/common.sh@470 -- # nvmfpid=2681256 00:20:55.677 21:35:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:55.677 21:35:21 -- nvmf/common.sh@471 -- # waitforlisten 2681256 00:20:55.677 21:35:21 -- common/autotest_common.sh@817 -- # '[' -z 2681256 ']' 00:20:55.677 21:35:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.677 21:35:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:55.677 21:35:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.677 21:35:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:55.677 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:20:55.677 [2024-04-24 21:35:21.250483] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:20:55.677 [2024-04-24 21:35:21.250566] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.677 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.677 [2024-04-24 21:35:21.315467] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.935 [2024-04-24 21:35:21.429419] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.935 [2024-04-24 21:35:21.429478] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.935 [2024-04-24 21:35:21.429491] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.935 [2024-04-24 21:35:21.429502] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
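The tc1 pass above hinges on an expected failure: nothing is listening on 10.0.0.2:4420 at that point, so connect() returns errno 111 (ECONNREFUSED), spdk_nvme_probe() fails, and host/target_disconnect.sh@37 only has to verify that the reconnect example exited with status 1. A minimal sketch of that expected-failure pattern, using the same reconnect binary and arguments traced above:

set +e                                  # tolerate the expected non-zero exit
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
    -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
rc=$?
set -e
# tc1 passes only when the probe failed with status 1
[ "$rc" -eq 1 ] || exit 1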
00:20:55.935 [2024-04-24 21:35:21.429513] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.935 [2024-04-24 21:35:21.429577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:55.935 [2024-04-24 21:35:21.429641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:55.935 [2024-04-24 21:35:21.429764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:20:55.935 [2024-04-24 21:35:21.429767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:55.935 21:35:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:55.935 21:35:21 -- common/autotest_common.sh@850 -- # return 0 00:20:55.935 21:35:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:55.935 21:35:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:55.935 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:20:55.935 21:35:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.935 21:35:21 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:55.935 21:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.935 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:20:56.193 Malloc0 00:20:56.193 21:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.193 21:35:21 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:56.193 21:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.193 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:20:56.193 [2024-04-24 21:35:21.626287] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.193 21:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.193 21:35:21 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:56.193 21:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.193 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:20:56.193 21:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.193 21:35:21 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:56.193 21:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.193 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:20:56.193 21:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.193 21:35:21 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.193 21:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.193 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:20:56.193 [2024-04-24 21:35:21.654529] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.193 21:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.194 21:35:21 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:56.194 21:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.194 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:20:56.194 21:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.194 21:35:21 -- host/target_disconnect.sh@50 -- # reconnectpid=2681280 00:20:56.194 21:35:21 -- host/target_disconnect.sh@52 -- # sleep 2 00:20:56.194 21:35:21 -- host/target_disconnect.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:56.194 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.101 21:35:23 -- host/target_disconnect.sh@53 -- # kill -9 2681256 00:20:58.101 21:35:23 -- host/target_disconnect.sh@55 -- # sleep 2 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.101 Read completed with error (sct=0, sc=8) 00:20:58.101 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 [2024-04-24 21:35:23.679101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: 
*ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 [2024-04-24 21:35:23.679392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O 
failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Read completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 Write completed with error (sct=0, sc=8) 00:20:58.102 starting I/O failed 00:20:58.102 [2024-04-24 21:35:23.679712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:58.102 [2024-04-24 21:35:23.679942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.680132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.680159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.102 qpair failed and we were unable to recover it. 00:20:58.102 [2024-04-24 21:35:23.680326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.680540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.680567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.102 qpair failed and we were unable to recover it. 00:20:58.102 [2024-04-24 21:35:23.680744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.680938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.680964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.102 qpair failed and we were unable to recover it. 
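The burst of "completed with error (sct=0, sc=8)" completions above is the point of tc2: the target that was serving the reconnect workload has just been killed with SIGKILL (kill -9 2681256), so every queued I/O on qpairs 2-4 completes in error and each CQ reports transport error -6. A condensed sketch of the sequence that produced it, assuming scripts/rpc.py is the backend behind the rpc_cmd wrapper and that polling for /var/tmp/spdk.sock is an acceptable stand-in for waitforlisten:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# target on cores 4-7 (-m 0xF0), inside the test namespace
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude waitforlisten

# same RPC sequence as host/target_disconnect.sh@19-26
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# start I/O against the target, then yank the target out from under it
"$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
sleep 2
kill -9 "$nvmfpid"      # in-flight I/O aborts; reconnect attempts begin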
00:20:58.102 [2024-04-24 21:35:23.681168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.681379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.681406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.102 qpair failed and we were unable to recover it. 00:20:58.102 [2024-04-24 21:35:23.681657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.681826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.681853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.102 qpair failed and we were unable to recover it. 00:20:58.102 [2024-04-24 21:35:23.682026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.682243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.682269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.102 qpair failed and we were unable to recover it. 00:20:58.102 [2024-04-24 21:35:23.682426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.102 [2024-04-24 21:35:23.682618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.682653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.682817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.682987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.683013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.683199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.683479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.683505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.683733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.683919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.683945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 
00:20:58.103 [2024-04-24 21:35:23.684138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.684349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.684375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.684587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.684760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.684787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.684957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.685112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.685153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.685350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.685511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.685551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.685787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.685953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.685993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.686304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.686508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.686532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.686794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.686978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.687004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 
00:20:58.103 [2024-04-24 21:35:23.687237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.687490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.687532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.687795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.687960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.688001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.688202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.688509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.688534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.688765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.688977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.689020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.689310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.689530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.689556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.689787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.689993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.690035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.690223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.690514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.690561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 
00:20:58.103 [2024-04-24 21:35:23.690819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.691022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.691066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.691278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.691632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.691660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.691856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.692112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.692152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.692366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.692607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.692654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.692811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.693014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.693039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.693319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.693622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.693675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.693844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.694031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.694072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 
00:20:58.103 [2024-04-24 21:35:23.694318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.694542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.694568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.694813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.694968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.694994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.695182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.695364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.695390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.103 [2024-04-24 21:35:23.695854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.696179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.103 [2024-04-24 21:35:23.696207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.103 qpair failed and we were unable to recover it. 00:20:58.104 [2024-04-24 21:35:23.696423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.696679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.696720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.104 qpair failed and we were unable to recover it. 00:20:58.104 [2024-04-24 21:35:23.696916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.697213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.697242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.104 qpair failed and we were unable to recover it. 00:20:58.104 [2024-04-24 21:35:23.697455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.697710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.697737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.104 qpair failed and we were unable to recover it. 
00:20:58.104 [2024-04-24 21:35:23.697933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.698122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.698149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.104 qpair failed and we were unable to recover it. 00:20:58.104 [2024-04-24 21:35:23.698386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.698669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.698695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.104 qpair failed and we were unable to recover it. 00:20:58.104 [2024-04-24 21:35:23.698891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.699103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.699145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.104 qpair failed and we were unable to recover it. 00:20:58.104 [2024-04-24 21:35:23.699411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.699658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.699700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.104 qpair failed and we were unable to recover it. 00:20:58.104 [2024-04-24 21:35:23.699925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.700130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.700173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.104 qpair failed and we were unable to recover it. 00:20:58.104 [2024-04-24 21:35:23.700385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.700652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.700678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.104 qpair failed and we were unable to recover it. 00:20:58.104 [2024-04-24 21:35:23.700882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.701161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.104 [2024-04-24 21:35:23.701186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.104 qpair failed and we were unable to recover it. 
00:20:58.104 [2024-04-24 21:35:23.701370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.104 [2024-04-24 21:35:23.701582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.104 [2024-04-24 21:35:23.701608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:20:58.104 qpair failed and we were unable to recover it.
00:20:58.104 [2024-04-24 21:35:23.701836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.104 [2024-04-24 21:35:23.702064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.104 [2024-04-24 21:35:23.702106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:20:58.104 qpair failed and we were unable to recover it.
[... the same sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 21:35:23.702 through 21:35:23.774 ...]
00:20:58.378 [2024-04-24 21:35:23.774821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.378 [2024-04-24 21:35:23.775011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.378 [2024-04-24 21:35:23.775037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:20:58.378 qpair failed and we were unable to recover it.
00:20:58.378 [2024-04-24 21:35:23.775265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.378 [2024-04-24 21:35:23.775448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.378 [2024-04-24 21:35:23.775474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.378 qpair failed and we were unable to recover it. 00:20:58.378 [2024-04-24 21:35:23.775661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.378 [2024-04-24 21:35:23.775921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.378 [2024-04-24 21:35:23.775960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.378 qpair failed and we were unable to recover it. 00:20:58.378 [2024-04-24 21:35:23.776218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.378 [2024-04-24 21:35:23.776433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.378 [2024-04-24 21:35:23.776459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.378 qpair failed and we were unable to recover it. 00:20:58.378 [2024-04-24 21:35:23.776674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.378 [2024-04-24 21:35:23.776887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.378 [2024-04-24 21:35:23.776913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.378 qpair failed and we were unable to recover it. 00:20:58.378 [2024-04-24 21:35:23.777101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.378 [2024-04-24 21:35:23.777357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.777399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.777620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.777823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.777849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.778038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.778287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.778314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 
00:20:58.379 [2024-04-24 21:35:23.778558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.778787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.778813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.779032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.779263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.779308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.779580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.779797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.779824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.780113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.780474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.780525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.780758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.780977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.781021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.781271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.781550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.781594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.781784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.781937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.781963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 
00:20:58.379 [2024-04-24 21:35:23.782217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.782502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.782527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.782759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.783039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.783082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.783402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.783598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.783623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.783850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.784084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.784127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.784347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.784541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.784568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.784793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.785066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.785091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.785315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.785492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.785519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 
00:20:58.379 [2024-04-24 21:35:23.785744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.785919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.785945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.786184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.786389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.786416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.786592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.786863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.786905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.787160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.787480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.787523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.787724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.787913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.787941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.788138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.788327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.788354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.788575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.788798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.788825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 
00:20:58.379 [2024-04-24 21:35:23.788983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.789262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.789305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.789543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.789740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.789766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.789963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.790120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.790147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.790301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.790510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.790535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.790762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.790972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.790997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.379 qpair failed and we were unable to recover it. 00:20:58.379 [2024-04-24 21:35:23.791223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.379 [2024-04-24 21:35:23.791455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.791483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.791722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.791984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.792026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 
00:20:58.380 [2024-04-24 21:35:23.792199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.792359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.792385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.792598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.792855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.792898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.793116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.793322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.793353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.793547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.793749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.793775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.793995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.794252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.794279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.794559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.794736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.794762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.794959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.795161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.795188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 
00:20:58.380 [2024-04-24 21:35:23.795401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.795600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.795626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.795842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.796108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.796151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.796351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.796574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.796599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.796909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.797257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.797319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.797577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.797769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.797795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.797992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.798223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.798270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.798481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.798661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.798688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 
00:20:58.380 [2024-04-24 21:35:23.798927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.799266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.799316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.799512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.799727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.799754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.799960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.800328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.800370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.800696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.800884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.800909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.801126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.801376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.801421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.801642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.801842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.801867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.802103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.802367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.802410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 
00:20:58.380 [2024-04-24 21:35:23.802638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.802840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.802866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.803096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.803336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.803367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.803553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.803773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.803800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.804051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.804277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.804324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.804512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.804800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.804860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.380 [2024-04-24 21:35:23.805032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.805287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.380 [2024-04-24 21:35:23.805331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.380 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.805526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.805749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.805796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 
00:20:58.381 [2024-04-24 21:35:23.806035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.806293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.806336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.806551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.806787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.806813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.807046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.807326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.807370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.807739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.807975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.808018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.808261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.808696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.808726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.808902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.809212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.809237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.809428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.809677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.809704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 
00:20:58.381 [2024-04-24 21:35:23.809910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.810144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.810186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.810401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.810648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.810689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.810907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.811068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.811094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.811280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.811482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.811508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.811720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.811974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.812018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.812226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.812472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.812499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.812733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.812956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.812999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 
00:20:58.381 [2024-04-24 21:35:23.813228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.813542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.813567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.813801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.814070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.814114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.814292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.814496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.814521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.814764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.815022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.815064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.815281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.815442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.815468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.815654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.815829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.815855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.816085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.816333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.816376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 
00:20:58.381 [2024-04-24 21:35:23.816583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.816778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.816804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.816980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.817211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.817238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.817502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.817760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.817801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.817999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.818208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.818251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.818498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.818732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.818758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.818959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.819156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.381 [2024-04-24 21:35:23.819198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.381 qpair failed and we were unable to recover it. 00:20:58.381 [2024-04-24 21:35:23.819399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.819637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.819663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.382 qpair failed and we were unable to recover it. 
00:20:58.382 [2024-04-24 21:35:23.819849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.820046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.820074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.382 qpair failed and we were unable to recover it. 00:20:58.382 [2024-04-24 21:35:23.820342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.820549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.820574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.382 qpair failed and we were unable to recover it. 00:20:58.382 [2024-04-24 21:35:23.820794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.821059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.821084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.382 qpair failed and we were unable to recover it. 00:20:58.382 [2024-04-24 21:35:23.821313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.821537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.821562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.382 qpair failed and we were unable to recover it. 00:20:58.382 [2024-04-24 21:35:23.821779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.822019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.822048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.382 qpair failed and we were unable to recover it. 00:20:58.382 [2024-04-24 21:35:23.822315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.822514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.822539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.382 qpair failed and we were unable to recover it. 00:20:58.382 [2024-04-24 21:35:23.822752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.822944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.382 [2024-04-24 21:35:23.822970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.382 qpair failed and we were unable to recover it. 
00:20:58.382 [2024-04-24 21:35:23.823305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.382 [2024-04-24 21:35:23.823552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.382 [2024-04-24 21:35:23.823576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:20:58.382 qpair failed and we were unable to recover it.
[... the same four-entry sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f68dc000b90 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 21:35:23.823 and 21:35:23.898 ...]
00:20:58.387 [2024-04-24 21:35:23.898645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.387 [2024-04-24 21:35:23.898878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.387 [2024-04-24 21:35:23.898903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:20:58.387 qpair failed and we were unable to recover it.
00:20:58.387 [2024-04-24 21:35:23.899083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.387 [2024-04-24 21:35:23.899276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.899303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.899467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.899682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.899708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.899921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.900185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.900211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.900417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.900656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.900681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.900888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.901096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.901139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.901416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.901611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.901646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.901915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.902131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.902157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 
00:20:58.388 [2024-04-24 21:35:23.902354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.902587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.902613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.902830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.903088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.903114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.903306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.903711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.903738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.903955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.904179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.904204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.904459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.904695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.904721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.904942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.905156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.905201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.905399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.905599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.905625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 
00:20:58.388 [2024-04-24 21:35:23.905859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.906097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.906126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.906347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.906600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.906625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.906824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.907071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.907114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.907276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.907452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.907477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.907713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.907890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.907932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.908141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.908373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.908398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.908603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.908876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.908902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 
00:20:58.388 [2024-04-24 21:35:23.909152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.909384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.909426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.909624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.909837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.909862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.910118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.910498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.910546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.910788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.911004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.911031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.911455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.911770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.911796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.912015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.912242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.912285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.912519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.912729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.912756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 
00:20:58.388 [2024-04-24 21:35:23.913036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.913262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.913288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.388 qpair failed and we were unable to recover it. 00:20:58.388 [2024-04-24 21:35:23.913546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.388 [2024-04-24 21:35:23.913759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.913785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.914026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.914227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.914252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.914498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.914851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.914876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.915088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.915318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.915360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.915598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.915828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.915855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.916109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.916372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.916412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 
00:20:58.389 [2024-04-24 21:35:23.916640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.916880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.916906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.917116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.917500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.917557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.917787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.918024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.918075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.918257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.918505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.918547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.918740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.918963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.919007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.919202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.919442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.919471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.919737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.920062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.920091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 
00:20:58.389 [2024-04-24 21:35:23.920348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.920572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.920597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.920813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.921011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.921054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.921297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.921547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.921586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.921763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.921950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.921992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.922237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.922489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.922532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.922734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.922970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.922996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.923188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.923452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.923478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 
00:20:58.389 [2024-04-24 21:35:23.923704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.923918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.923963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.924201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.924511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.924535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.924728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.924965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.924994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.925247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.925438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.925464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.925684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.925942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.925984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.926239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.926464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.926508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.926709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.926961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.927005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 
00:20:58.389 [2024-04-24 21:35:23.927217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.927405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.927432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.927670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.927897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.389 [2024-04-24 21:35:23.927924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.389 qpair failed and we were unable to recover it. 00:20:58.389 [2024-04-24 21:35:23.928180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.928607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.928674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.928855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.929093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.929122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.929417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.929612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.929642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.929855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.930111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.930153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.930399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.930625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.930659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 
00:20:58.390 [2024-04-24 21:35:23.930879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.931070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.931113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.931299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.931493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.931520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.931711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.931929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.931976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.932155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.932368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.932394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.932710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.932928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.932973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.933224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.933454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.933481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.933717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.934009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.934057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 
00:20:58.390 [2024-04-24 21:35:23.934299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.934489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.934518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.934751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.934989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.935031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.935304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.935509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.935534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.935760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.935992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.936036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.936253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.936496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.936525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.936775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.936993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.937023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.937196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.937379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.937409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 
00:20:58.390 [2024-04-24 21:35:23.937626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.937920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.937946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.938189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.938409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.938435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.938636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.938876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.938924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.939162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.939456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.939482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.939673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.939945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.939989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.940211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.940401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.940428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.940595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.940907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.940933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 
00:20:58.390 [2024-04-24 21:35:23.941103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.941274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.941305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.941522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.941724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.941750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.390 [2024-04-24 21:35:23.941960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.942167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.390 [2024-04-24 21:35:23.942193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.390 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.942415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.942684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.942715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.942943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.943201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.943243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.943471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.943651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.943678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.943887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.944116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.944161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 
00:20:58.391 [2024-04-24 21:35:23.944368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.944584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.944612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.944861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.945104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.945150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.945373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.945586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.945611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.945856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.946241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.946293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.946512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.946688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.946714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.946926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.947126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.947171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.947352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.947505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.947535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 
00:20:58.391 [2024-04-24 21:35:23.947766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.947953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.947979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.948182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.948413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.948438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.948663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.948907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.948950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.949283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.949471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.949497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.949684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.949897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.949942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.950108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.950350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.950391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.950595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.950880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.950924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 
00:20:58.391 [2024-04-24 21:35:23.951388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.951591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.951636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.951898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.952264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.952318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.952537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.952714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.952745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.953012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.953248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.953293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.953552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.953769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.953813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.954053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.954356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.954383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.954585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.954840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.954884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 
00:20:58.391 [2024-04-24 21:35:23.955091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.955328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.955353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.955551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.955847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.955891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.956131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.956379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.956406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.391 [2024-04-24 21:35:23.956634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.956848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.391 [2024-04-24 21:35:23.956890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.391 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.957144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.957358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.957384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.957586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.957944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.957976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.958224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.958422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.958448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 
00:20:58.392 [2024-04-24 21:35:23.958645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.958883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.958913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.959150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.959413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.959438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.959676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.959924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.959968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.960205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.960435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.960464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.960663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.960877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.960906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.961117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.961379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.961405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.961617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.961863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.961907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 
00:20:58.392 [2024-04-24 21:35:23.962183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.962408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.962434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.962696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.962934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.962983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.963222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.963446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.963487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.963698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.963923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.963948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.964191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.964416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.964443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.964648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.964945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.964992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.965294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.965503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.965530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 
00:20:58.392 [2024-04-24 21:35:23.965750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.965980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.966025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.966239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.966477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.966508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.966692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.966922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.966965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.967185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.967391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.967418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.967649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.967864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.967910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.968161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.968353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.968380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 00:20:58.392 [2024-04-24 21:35:23.968571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.968782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.968826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.392 qpair failed and we were unable to recover it. 
00:20:58.392 [2024-04-24 21:35:23.969048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.392 [2024-04-24 21:35:23.969291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.969318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.969501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.969770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.969814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.970053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.970341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.970368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.970560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.970771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.970815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.971054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.971256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.971283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.971485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.971770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.971799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.972023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.972222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.972249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 
00:20:58.393 [2024-04-24 21:35:23.972439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.972602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.972650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.972894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.973073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.973099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.973296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.973564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.973591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.973783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.974072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.974115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.974337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.974537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.974565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.974801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.975009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.975035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.975256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.975418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.975444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 
00:20:58.393 [2024-04-24 21:35:23.975633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.975891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.975935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.976101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.976311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.976354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.976546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.976722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.976749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.976971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.977208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.977252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.977490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.977690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.977718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.977935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.978150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.978177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.978412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.978624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.978656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 
00:20:58.393 [2024-04-24 21:35:23.978844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.979099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.979142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.979335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.979589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.979634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.979851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.980087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.980113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.980319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.980525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.980551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.980767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.981026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.981072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.981297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.981499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.981525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.981785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.981974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.982000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 
00:20:58.393 [2024-04-24 21:35:23.982245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.982442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.982467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.393 qpair failed and we were unable to recover it. 00:20:58.393 [2024-04-24 21:35:23.982706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.393 [2024-04-24 21:35:23.982942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.982986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.983268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.983458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.983484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.983720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.983946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.983977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.984188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.984446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.984473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.984754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.985016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.985063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.985273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.985452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.985483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 
00:20:58.394 [2024-04-24 21:35:23.985739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.985922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.985948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.986164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.986381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.986408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.986632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.986838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.986869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.987114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.987321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.987351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.987541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.987773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.987801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.988071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.988457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.988514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.988691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.988895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.988941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 
00:20:58.394 [2024-04-24 21:35:23.989157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.989387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.989414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.989582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.989805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.989854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.990090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.990421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.990447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.990649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.990860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.990904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.991086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.991264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.991292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.991492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.991673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.991704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.991922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.992161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.992187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 
00:20:58.394 [2024-04-24 21:35:23.992404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.992561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.992588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.992804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.993064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.993110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.993335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.993550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.993576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.993793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.994056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.994100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.994255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.994450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.994477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.994667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.994881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.994907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.995101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.995311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.995337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 
00:20:58.394 [2024-04-24 21:35:23.995527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.995736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.995780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.995972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.996206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.996233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:58.394 qpair failed and we were unable to recover it. 00:20:58.394 [2024-04-24 21:35:23.996463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.394 [2024-04-24 21:35:23.996642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.996690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:23.996898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.997063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.997092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:23.997297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.997618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.997652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:23.997875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.998084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.998112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:23.998313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.998679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.998706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 
00:20:58.395 [2024-04-24 21:35:23.998938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.999310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.999361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:23.999537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.999755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:23.999782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.000021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.000273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.000301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.000484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.000651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.000677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.000863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.001048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.001077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.001305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.001542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.001569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.001728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.001966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.001994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 
00:20:58.395 [2024-04-24 21:35:24.002301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.002569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.002595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.002766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.002961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.003003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.003176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.003386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.003415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.003645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.003804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.003830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.004041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.004300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.004352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.004557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.004766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.004792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.005003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.005246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.005275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 
00:20:58.395 [2024-04-24 21:35:24.005513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.005677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.005703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.005987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.006365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.006431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.006681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.006872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.006897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.007113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.007448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.007495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.007733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.007913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.007942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.008241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.008601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.008679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.008936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.009259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.009310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 
00:20:58.395 [2024-04-24 21:35:24.009538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.009749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.009775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.009986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.010216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.010244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.010513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.010773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.010799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.395 qpair failed and we were unable to recover it. 00:20:58.395 [2024-04-24 21:35:24.011008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.395 [2024-04-24 21:35:24.011277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.011318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.011537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.011749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.011784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.011971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.012152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.012195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.012555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.012782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.012808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 
00:20:58.396 [2024-04-24 21:35:24.013019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.013251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.013280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.013484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.013735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.013761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.013970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.014158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.014218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.014571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.014812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.014838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.015025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.015263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.015291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.015531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.015721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.015748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.015961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.016142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.016170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 
00:20:58.396 [2024-04-24 21:35:24.016377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.016610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.016644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.016886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.017292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.017343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.017577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.017794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.017823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.018101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.018305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.018334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.018561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.018798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.018827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.019032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.019299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.019347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.019574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.019803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.019832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 
00:20:58.396 [2024-04-24 21:35:24.020158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.020591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.020656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.020895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.021073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.021099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.021307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.021663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.021692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.021875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.022083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.022112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.022461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.022774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.022803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.023029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.023263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.023291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.023496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.023729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.023759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 
00:20:58.396 [2024-04-24 21:35:24.023941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.024150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.024179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.024544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.024785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.024814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.396 qpair failed and we were unable to recover it. 00:20:58.396 [2024-04-24 21:35:24.024991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.396 [2024-04-24 21:35:24.025340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.025391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.025600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.025775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.025803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.026034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.026257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.026282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.026605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.026852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.026881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.027115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.027323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.027349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 
00:20:58.397 [2024-04-24 21:35:24.027555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.027770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.027800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.028001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.028210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.028238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.028444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.028674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.028703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.028886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.029112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.029137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.029384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.029584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.029612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.029860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.030271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.030322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.030702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.030905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.030933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 
00:20:58.397 [2024-04-24 21:35:24.031182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.031481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.031538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.031748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.031956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.031984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.032189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.032415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.032441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.032661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.032853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.032879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.033147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.033381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.033422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.033643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.033882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.033923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.034153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.034517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.034570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 
00:20:58.397 [2024-04-24 21:35:24.034758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.034985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.035010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.035240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.035398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.035438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.035623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.035843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.035871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.036078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.036256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.036284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.036702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.036930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.036958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.037166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.037602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.037682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 00:20:58.397 [2024-04-24 21:35:24.037911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.038145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.038174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.397 qpair failed and we were unable to recover it. 
00:20:58.397 [2024-04-24 21:35:24.038397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.038612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.397 [2024-04-24 21:35:24.038654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.038840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.039165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.039215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.039431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.039671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.039700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.039903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.040086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.040115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.040319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.040498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.040527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.040764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.040955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.040981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.041217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.041516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.041544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 
00:20:58.398 [2024-04-24 21:35:24.041720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.041925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.041954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.042173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.042389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.042417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.042606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.042822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.042852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.043065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.043294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.043323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.043534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.043707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.043736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.043944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.044144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.044173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 00:20:58.398 [2024-04-24 21:35:24.044385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.044600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.398 [2024-04-24 21:35:24.044636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.398 qpair failed and we were unable to recover it. 
00:20:58.398 [2024-04-24 21:35:24.044850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.045035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.045062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.045326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.045552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.045580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.045791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.045969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.045999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.046200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.046406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.046435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.046653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.046843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.046870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.047056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.047329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.047383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.047601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.047835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.047864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 
00:20:58.698 [2024-04-24 21:35:24.048092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.048292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.048321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.048526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.048754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.048784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.049014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.049370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.049421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.049650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.049835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.049863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.050193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.050659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.050688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.050920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.051253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.051314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.051546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.051756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.051785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 
00:20:58.698 [2024-04-24 21:35:24.051985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.052216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.052242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.052558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.052791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.052819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.053006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.053193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.053219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.053371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.053621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.053655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.053860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.054180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.054242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.698 [2024-04-24 21:35:24.054499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.054733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.698 [2024-04-24 21:35:24.054763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.698 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.054975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.055248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.055299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 
00:20:58.699 [2024-04-24 21:35:24.055538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.055744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.055773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.056348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.056592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.056622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.056833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.057060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.057086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.057255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.057462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.057491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.057731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.057933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.057961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.058136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.058411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.058459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.058700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.058877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.058906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 
00:20:58.699 [2024-04-24 21:35:24.059102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.059343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.059390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.059581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.059800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.059829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.060029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.060239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.060267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.060442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.060604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.060640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.060834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.061086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.061145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.061392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.061597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.061626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.061818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.061993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.062022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 
00:20:58.699 [2024-04-24 21:35:24.062397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.062653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.062683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.062866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.063042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.063076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.063248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.063581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.063641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.063860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.064186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.064243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.064487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.064693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.064723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.064937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.065137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.065166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.065395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.065623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.065657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 
00:20:58.699 [2024-04-24 21:35:24.065871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.066219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.066257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.066673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.066856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.066884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.067079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.067317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.067343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.067548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.067747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.067773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.067975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.068225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.068272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.068657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.068845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.068874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.699 qpair failed and we were unable to recover it. 00:20:58.699 [2024-04-24 21:35:24.069112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.069345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.699 [2024-04-24 21:35:24.069374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 
00:20:58.700 [2024-04-24 21:35:24.069611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.069829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.069857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.070093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.070395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.070459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.070717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.070899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.070929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.071162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.071448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.071500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.071735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.071961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.072013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.072235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.072624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.072687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.072896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.073188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.073253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 
00:20:58.700 [2024-04-24 21:35:24.073466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.073681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.073707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.073905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.074158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.074206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.074434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.074612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.074648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.074857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.075058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.075086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.075294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.075478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.075505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.075737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.075970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.076021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 00:20:58.700 [2024-04-24 21:35:24.076225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.076430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.700 [2024-04-24 21:35:24.076459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.700 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 21:35:24.076656 through 21:35:24.139853, with only the timestamps advancing ...]
00:20:58.705 [2024-04-24 21:35:24.140100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.140302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.140332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.140515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.140731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.140759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.140969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.141170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.141197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.141412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.141600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.141624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.141809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.142017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.142045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.142261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.142466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.142493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.142693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.142878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.142906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 
00:20:58.705 [2024-04-24 21:35:24.143147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.143384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.143409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.143641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.143831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.143859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.144087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.144289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.144317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.144525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.144755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.144784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.144963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.145176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.145204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.145375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.145579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.145606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 00:20:58.705 [2024-04-24 21:35:24.145858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.146078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.705 [2024-04-24 21:35:24.146105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.705 qpair failed and we were unable to recover it. 
00:20:58.705 [2024-04-24 21:35:24.146341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.146515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.146543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.146793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.146998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.147023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.147192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.147429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.147457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.147665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.147866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.147894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.148108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.148337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.148364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.148576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.148803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.148828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.149070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.149299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.149327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 
00:20:58.706 [2024-04-24 21:35:24.149523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.149741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.149766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.149968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.150206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.150230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.150441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.150650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.150679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.150889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.151094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.151122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.151325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.151530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.151558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.151762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.151964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.151992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.152223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.152396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.152421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 
00:20:58.706 [2024-04-24 21:35:24.152612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.152836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.152861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.153025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.153237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.153262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.153561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.153823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.153849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.154036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.154202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.154226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.154420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.154659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.154701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.154891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.155088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.155112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.155317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.155545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.155573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 
00:20:58.706 [2024-04-24 21:35:24.155766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.155952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.155977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.156183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.156395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.156420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.156604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.156775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.156800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.157031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.157231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.157258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.157458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.157611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.157640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.157880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.158081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.158109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 00:20:58.706 [2024-04-24 21:35:24.158281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.158515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.158564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.706 qpair failed and we were unable to recover it. 
00:20:58.706 [2024-04-24 21:35:24.158767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.159000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.706 [2024-04-24 21:35:24.159028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.159236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.159438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.159465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.159674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.159864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.159888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.160128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.160436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.160493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.160732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.160896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.160939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.161182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.161390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.161422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.161654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.161893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.161921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 
00:20:58.707 [2024-04-24 21:35:24.162127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.162336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.162361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.162598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.162812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.162837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.163022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.163258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.163285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.163552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.163812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.163837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.164018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.164195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.164220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.164442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.164649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.164690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.164901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.165074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.165098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 
00:20:58.707 [2024-04-24 21:35:24.165284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.165441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.165467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.165689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.165851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.165877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.166119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.166350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.166377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.166638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.166828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.166855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.167065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.167267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.167294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.167512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.167689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.167718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.167891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.168098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.168125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 
00:20:58.707 [2024-04-24 21:35:24.168343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.168579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.168606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.168824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.169033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.169070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.169304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.169480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.169507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.707 qpair failed and we were unable to recover it. 00:20:58.707 [2024-04-24 21:35:24.169743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.707 [2024-04-24 21:35:24.169909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.169934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.170147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.170356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.170383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.170589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.170797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.170822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.171030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.171232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.171261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 
00:20:58.708 [2024-04-24 21:35:24.171505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.171738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.171763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.171974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.172179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.172207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.172412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.172644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.172672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.172884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.173094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.173121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.173326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.173532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.173559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.173764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.173951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.173992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.174195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.174396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.174424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 
00:20:58.708 [2024-04-24 21:35:24.174664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.174851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.174876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.175105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.175300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.175328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.175500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.175717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.175746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.175988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.176224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.176252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.176426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.176657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.176685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.176939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.177139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.177166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.177340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.177566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.177594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 
00:20:58.708 [2024-04-24 21:35:24.177816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.178049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.178076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.178265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.178438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.178466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.178670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.178874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.178902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.179112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.179294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.179321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.179532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.179715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.179741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.179953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.180136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.180163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.180377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.180584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.180609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 
00:20:58.708 [2024-04-24 21:35:24.180777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.180956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.180980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.181175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.181331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.181356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.181540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.181716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.181741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.181892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.182113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.182137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.708 qpair failed and we were unable to recover it. 00:20:58.708 [2024-04-24 21:35:24.182311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.708 [2024-04-24 21:35:24.182492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.182518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.182736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.182890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.182915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.183113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.183285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.183312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 
00:20:58.709 [2024-04-24 21:35:24.183519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.183720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.183750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.183917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.184134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.184159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.184355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.184544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.184569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.184751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.184912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.184938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.185152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.185380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.185407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.185602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.185768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.185794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.185982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.186826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.186856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 
00:20:58.709 [2024-04-24 21:35:24.187098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.187279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.187304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.187476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.187667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.187693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.187875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.188064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.188091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.188277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.188482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.188527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.188697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.188881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.188910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.189103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.189319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.189347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.189529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.189708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.189734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 
00:20:58.709 [2024-04-24 21:35:24.189955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.190158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.190186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.190404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.190592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.190617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.190821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.190997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.191024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.191238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.191411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.191440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.191658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.191824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.191865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.192077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.192264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.192292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 00:20:58.709 [2024-04-24 21:35:24.192486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.192680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.709 [2024-04-24 21:35:24.192709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.709 qpair failed and we were unable to recover it. 
00:20:58.709 [2024-04-24 21:35:24.192928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.709 [2024-04-24 21:35:24.193859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.709 [2024-04-24 21:35:24.193908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.709 qpair failed and we were unable to recover it.
00:20:58.709 [2024-04-24 21:35:24.194128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.709 [2024-04-24 21:35:24.194881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.709 [2024-04-24 21:35:24.194913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.709 qpair failed and we were unable to recover it.
00:20:58.709 [2024-04-24 21:35:24.195126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.709 [2024-04-24 21:35:24.195355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.709 [2024-04-24 21:35:24.195383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.709 qpair failed and we were unable to recover it.
00:20:58.709 [2024-04-24 21:35:24.195591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.709 [2024-04-24 21:35:24.195784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.709 [2024-04-24 21:35:24.195810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.709 qpair failed and we were unable to recover it.
00:20:58.709 [2024-04-24 21:35:24.196007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.709 [2024-04-24 21:35:24.196250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.709 [2024-04-24 21:35:24.196277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.709 qpair failed and we were unable to recover it.
00:20:58.709 [2024-04-24 21:35:24.196490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.709 [2024-04-24 21:35:24.196681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.196707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.196869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.197074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.197097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.197348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.197570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.197597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.197823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.198004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.198031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.198266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.198432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.198460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.198684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.198897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.198925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.199133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.199371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.199398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.199571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.199785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.199810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.200050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.200284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.200312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.200544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.200730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.200755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.200960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.201164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.201191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.201410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.201672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.201700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.201880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.202042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.202085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.202298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.202486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.202529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.202783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.202950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.202977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.203165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.203395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.203422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.203662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.203848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.203873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.204098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.204288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.204316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.204517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.204729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.204754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.204906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.205101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.205129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.205350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.205609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.205642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.205847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.206165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.206192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.206391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.206581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.206608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.206811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.206973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.207000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.207332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.207516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.207541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.207727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.207942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.207969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.208179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.208383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.208411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.208633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.208818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.208843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.209093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.209293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.209321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.710 qpair failed and we were unable to recover it.
00:20:58.710 [2024-04-24 21:35:24.209552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.209731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.710 [2024-04-24 21:35:24.209774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.210005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.210198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.210225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.210434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.210600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.210649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.210863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.211051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.211078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.211313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.211484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.211511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.211746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.211932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.211960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.212195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.212399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.212428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.212638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.212833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.212861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.213065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.213292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.213320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.213485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.213664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.213704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.213908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.214105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.214132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.214325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.214527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.214555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.214786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.214980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.215007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.215180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.215375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.215403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.215633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.215831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.215860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.216075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.216252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.216277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.216432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.216620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.216651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.216837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.217044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.217071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.217282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.217464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.217489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.217678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.217867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.217891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.218156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.218363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.218390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.218602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.218780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.218809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.218977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.219179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.219207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.219386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.219572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.219598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.219762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.219909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.219934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.220184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.220386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.220426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.220579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.220773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.220798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.220952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.221142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.221166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.221328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.221514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.221538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.711 qpair failed and we were unable to recover it.
00:20:58.711 [2024-04-24 21:35:24.221725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.711 [2024-04-24 21:35:24.221874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.221899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.222083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.222266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.222291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.222475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.222698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.222723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.222897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.223146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.223170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.223354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.223514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.223539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.223737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.223922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.223947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.224129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.224318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.224343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.224546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.224768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.224793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.224958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.225149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.225174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.225362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.225550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.225576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.225743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.225949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.225976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.226240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.226423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.226448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.226663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.226867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.226894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.227102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.227252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.227277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.227460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.227674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.227708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.227892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.228106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.228134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.228375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.228555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.228579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.228795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.228966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.228992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.229147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.229316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.229342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.229555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.229743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.229769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.229956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.230135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.230160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.230390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.230615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.230644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.230812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.230967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.230992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.231202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.231440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.231465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.231655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.231866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.712 [2024-04-24 21:35:24.231890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.712 qpair failed and we were unable to recover it.
00:20:58.712 [2024-04-24 21:35:24.232083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.232259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.232287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.232493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.232734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.232762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.232941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.233178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.233203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.233407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.233607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.233647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.233859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.234028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.234054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.234233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.234389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.234414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.234593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.234787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.234812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.235042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.235227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.235252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.235483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.235728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.235754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.235966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.236173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.236200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.236408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.236609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.236642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.236829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.237037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.237061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.237250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.237480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.237507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.237687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.237897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.237924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.238129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.238357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.238384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.238591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.238791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.238818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.238982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.239214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.239242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.239442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.239623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.239654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.239892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.240122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.240149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.240383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.240621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.240666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.240910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.241094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.241121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.241298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.241488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.241512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.241666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.241852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.241877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.242062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.242265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.242293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.242537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.242751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.242776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.242932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.243136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.243164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.243392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.243623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.243656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.243840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.244045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.244072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.244270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.244472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.244499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.713 qpair failed and we were unable to recover it.
00:20:58.713 [2024-04-24 21:35:24.244717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.713 [2024-04-24 21:35:24.244902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.244930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.714 qpair failed and we were unable to recover it.
00:20:58.714 [2024-04-24 21:35:24.245107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.245333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.245361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.714 qpair failed and we were unable to recover it.
00:20:58.714 [2024-04-24 21:35:24.245564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.245800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.245829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.714 qpair failed and we were unable to recover it.
00:20:58.714 [2024-04-24 21:35:24.246062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.246264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.246292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.714 qpair failed and we were unable to recover it.
00:20:58.714 [2024-04-24 21:35:24.246519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.246735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.246763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.714 qpair failed and we were unable to recover it.
00:20:58.714 [2024-04-24 21:35:24.246983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.247189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.247217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.714 qpair failed and we were unable to recover it.
00:20:58.714 [2024-04-24 21:35:24.247386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.247614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.247707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.714 qpair failed and we were unable to recover it.
00:20:58.714 [2024-04-24 21:35:24.247922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.248099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.248126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.714 qpair failed and we were unable to recover it.
00:20:58.714 [2024-04-24 21:35:24.248329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.248530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.714 [2024-04-24 21:35:24.248558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.714 qpair failed and we were unable to recover it.
00:20:58.714 [2024-04-24 21:35:24.248797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.248999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.249027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.249198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.249403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.249430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.249607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.249794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.249822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.250037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.250270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.250297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.250511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.250711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.250739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.250935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.251107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.251134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.251340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.251520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.251548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 
00:20:58.714 [2024-04-24 21:35:24.251727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.251894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.251919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.252126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.252331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.252358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.252537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.252738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.252766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.252944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.253125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.253153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.253365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.253601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.253633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.253863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.254061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.254089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.254317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.254529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.254555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 
00:20:58.714 [2024-04-24 21:35:24.254768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.254973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.255001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.255241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.255445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.255473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.255679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.255882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.255915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.256117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.256323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.256351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.256509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.256689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.256718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.256901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.257083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.257107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.714 qpair failed and we were unable to recover it. 00:20:58.714 [2024-04-24 21:35:24.257344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.714 [2024-04-24 21:35:24.257517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.257544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 
00:20:58.715 [2024-04-24 21:35:24.257720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.257953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.257981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.258187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.258412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.258439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.258673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.258884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.258913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.259110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.259335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.259363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.259572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.259808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.259837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.260051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.260259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.260293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.260512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.260787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.260815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 
00:20:58.715 [2024-04-24 21:35:24.261023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.261202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.261229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.261411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.261645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.261671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.261839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.262004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.262045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.262277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.262478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.262506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.262712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.262914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.262941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.263170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.263375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.263402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.263634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.263802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.263829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 
00:20:58.715 [2024-04-24 21:35:24.264021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.264232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.264260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.264497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.264670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.264698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.264911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.265150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.265178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.265408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.265584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.265613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.265848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.266054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.266082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.266259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.266489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.266516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.266757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.266986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.267013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 
00:20:58.715 [2024-04-24 21:35:24.267222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.267421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.267449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.267624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.267829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.267857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.268056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.268282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.268310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.268517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.268724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.268751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.268959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.269159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.269186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.269379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.269563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.269587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 00:20:58.715 [2024-04-24 21:35:24.269804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.270009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.270037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.715 qpair failed and we were unable to recover it. 
00:20:58.715 [2024-04-24 21:35:24.270238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.270432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.715 [2024-04-24 21:35:24.270460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.270672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.270865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.270893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.271071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.271310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.271337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.271546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.271729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.271754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.271906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.272151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.272178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.272391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.272639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.272668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.272895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.273096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.273123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 
00:20:58.716 [2024-04-24 21:35:24.273351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.273554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.273581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.273769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.273974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.274001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.274205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.274435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.274462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.274687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.274875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.274900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.275083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.275280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.275308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.275478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.275659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.275687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.275872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.276070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.276097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 
00:20:58.716 [2024-04-24 21:35:24.276266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.276464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.276491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.276692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.276920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.276948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.277134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.277362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.277389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.277641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.277859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.277884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.278072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.278232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.278258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.278419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.278620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.278654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.278861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.279060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.279087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 
00:20:58.716 [2024-04-24 21:35:24.279290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.279491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.279518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.279720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.279922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.279949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.280145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.280371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.280399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.280599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.280844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.716 [2024-04-24 21:35:24.280872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.716 qpair failed and we were unable to recover it. 00:20:58.716 [2024-04-24 21:35:24.281045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.281252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.281280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.281489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.281692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.281721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.281950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.282121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.282148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 
00:20:58.717 [2024-04-24 21:35:24.282309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.282512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.282544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.282746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.282953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.282981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.283184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.283354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.283382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.283617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.283785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.283810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.284029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.284257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.284285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.284491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.284674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.284700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.284861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.285060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.285088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 
00:20:58.717 [2024-04-24 21:35:24.285266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.285475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.285504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.285718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.285911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.285936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.286149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.286328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.286357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.286594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.286765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.286790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.286998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.287167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.287194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.287393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.287586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.287612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.287831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.288024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.288051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 
00:20:58.717 [2024-04-24 21:35:24.288255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.288461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.288490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.288724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.288925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.288954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.289185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.289391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.289419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.289587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.289773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.289801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.290005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.290217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.290246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.290484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.290658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.290688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.290919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.291150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.291177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 
00:20:58.717 [2024-04-24 21:35:24.291377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.291577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.291604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.291790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.291996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.292025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.292229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.292431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.292459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.292640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.292867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.292895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.717 [2024-04-24 21:35:24.293128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.293314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.717 [2024-04-24 21:35:24.293342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.717 qpair failed and we were unable to recover it. 00:20:58.718 [2024-04-24 21:35:24.293524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.293712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.293738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.718 qpair failed and we were unable to recover it. 00:20:58.718 [2024-04-24 21:35:24.293920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.294156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.294183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.718 qpair failed and we were unable to recover it. 
00:20:58.718 [2024-04-24 21:35:24.294417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.294596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.294625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.718 qpair failed and we were unable to recover it. 00:20:58.718 [2024-04-24 21:35:24.294864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.295068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.295096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.718 qpair failed and we were unable to recover it. 00:20:58.718 [2024-04-24 21:35:24.295330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.295535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.295562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.718 qpair failed and we were unable to recover it. 00:20:58.718 [2024-04-24 21:35:24.295749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.295949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.295976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.718 qpair failed and we were unable to recover it. 00:20:58.718 [2024-04-24 21:35:24.296183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.296361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.296389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.718 qpair failed and we were unable to recover it. 00:20:58.718 [2024-04-24 21:35:24.296589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.296831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.296856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.718 qpair failed and we were unable to recover it. 00:20:58.718 [2024-04-24 21:35:24.297071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.297276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.718 [2024-04-24 21:35:24.297304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:58.718 qpair failed and we were unable to recover it. 
00:20:58.718 [2024-04-24 21:35:24.297503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.718 [2024-04-24 21:35:24.297711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.718 [2024-04-24 21:35:24.297739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.718 qpair failed and we were unable to recover it.
[... the same record pattern (two "connect() failed, errno = 111" lines, one "sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt from 21:35:24.297921 through 21:35:24.357093 ...]
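errno = 111 is ECONNREFUSED on Linux: the host can reach 10.0.0.2, but nothing is accepting connections on NVMe/TCP port 4420 at this point in the test, so every connect() issued by the reconnect loop is refused immediately. A minimal standalone sketch of the failing call (plain POSIX sockets rather than SPDK's socket layer; address and port copied from the log):

/* Sketch: reproduce errno 111 (ECONNREFUSED) with a plain blocking
 * connect() to a port that has no listener. Illustrative only; this is
 * not SPDK's posix_sock_create(), just the same underlying syscall. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}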
00:20:58.723 [2024-04-24 21:35:24.357322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.723 [2024-04-24 21:35:24.357568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.723 [2024-04-24 21:35:24.357607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:20:58.723 qpair failed and we were unable to recover it.
00:20:58.723 Read completed with error (sct=0, sc=8)
00:20:58.723 starting I/O failed
00:20:58.723 Read completed with error (sct=0, sc=8)
00:20:58.723 starting I/O failed
[... 32 outstanding I/Os in total (a mix of "Read completed with error" and "Write completed with error", all with sct=0, sc=8) are failed this way, each followed by "starting I/O failed" ...]
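Per the NVMe base specification, sct=0 is Status Code Type 0 (Generic Command Status) and sc=8 is status code 0x08, Command Aborted due to SQ Deletion: the 32 outstanding reads and writes are not media errors, they are being aborted because their submission queue disappeared along with the failed qpair. A hedged sketch of how an SPDK completion callback can tell this case apart from a genuine I/O failure (the constants are from spdk/nvme_spec.h; io_complete is an illustrative name, not part of this test):

/* Sketch: distinguish "aborted because the qpair/SQ was torn down" from
 * other completion errors. SPDK_NVME_SCT_GENERIC and
 * SPDK_NVME_SC_ABORTED_SQ_DELETION correspond to the (sct=0, sc=8)
 * pairs in the log above. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    if (!spdk_nvme_cpl_is_error(cpl)) {
        return; /* normal completion */
    }
    if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
        cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
        /* Matches the log: the I/O was aborted on qpair teardown and
         * can be resubmitted once a new qpair is connected. */
        fprintf(stderr, "I/O aborted, SQ deleted (sct=%u, sc=0x%x)\n",
                (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
    } else {
        fprintf(stderr, "I/O failed (sct=%u, sc=0x%x)\n",
                (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
    }
}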
with error (sct=0, sc=8) 00:20:58.723 starting I/O failed 00:20:58.723 Read completed with error (sct=0, sc=8) 00:20:58.723 starting I/O failed 00:20:58.723 Write completed with error (sct=0, sc=8) 00:20:58.723 starting I/O failed 00:20:58.723 Read completed with error (sct=0, sc=8) 00:20:58.723 starting I/O failed 00:20:58.723 Read completed with error (sct=0, sc=8) 00:20:58.723 starting I/O failed 00:20:58.723 Read completed with error (sct=0, sc=8) 00:20:58.723 starting I/O failed 00:20:58.723 Read completed with error (sct=0, sc=8) 00:20:58.723 starting I/O failed 00:20:58.723 [2024-04-24 21:35:24.358002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:58.723 [2024-04-24 21:35:24.358232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.358479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.358510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:58.723 qpair failed and we were unable to recover it. 00:20:58.723 [2024-04-24 21:35:24.358713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.358902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.358944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:58.723 qpair failed and we were unable to recover it. 00:20:58.723 [2024-04-24 21:35:24.359184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.359427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.359455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:58.723 qpair failed and we were unable to recover it. 00:20:58.723 [2024-04-24 21:35:24.359651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.359840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.359865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:58.723 qpair failed and we were unable to recover it. 00:20:58.723 [2024-04-24 21:35:24.360111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.360551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.360601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:58.723 qpair failed and we were unable to recover it. 00:20:58.723 [2024-04-24 21:35:24.360799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.361006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.723 [2024-04-24 21:35:24.361034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:58.723 qpair failed and we were unable to recover it. 
00:20:58.723 [2024-04-24 21:35:24.361272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.723 [2024-04-24 21:35:24.361463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.723 [2024-04-24 21:35:24.361488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.723 qpair failed and we were unable to recover it.
00:20:58.723 [2024-04-24 21:35:24.361713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.361921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.361969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.362164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.362398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.362427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.362638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.362844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.362869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.363078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.363341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.363369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.363577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.363793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.363820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.364015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.364203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.364228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.364534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.364808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.364835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.365029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.365390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.365443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.365672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.365839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.365864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.366228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.366532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.366561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.366787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.367035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.367063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.367268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.367434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.367459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.367657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.367842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.367867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.368081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.368323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.368350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.368569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.368749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.368776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.368967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.369199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.369228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.369457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.369704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.369730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.369967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.370195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.370223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.370400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.370640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.370668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.370866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.371061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.371086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.371325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.371526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.371554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.997 [2024-04-24 21:35:24.371756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.371918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.997 [2024-04-24 21:35:24.371960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.997 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.372183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.372377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.372405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.372639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.372823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.372848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.373065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.373300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.373325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.373537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.373738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.373767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.373947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.374176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.374205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.374445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.374654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.374683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.374920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.375145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.375170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.375355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.375593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.375618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.375790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.375979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.376004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.376207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.376383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.376411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.376650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.376825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.376853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.377061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.377248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.377274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.377499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.377741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.377770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.377943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.378171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.378199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.378426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.378605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.378639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.378846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.379057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.379084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.379256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.379468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.379496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.379711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.379947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.379975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.380188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.380392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.380419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.380653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.380830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.380860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.381032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.381237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.381264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.381505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.381746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.381775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.381953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.382182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.382210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.382379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.382586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.382614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.382821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.383026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.383055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.383263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.383453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.383478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.383667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.383821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.383863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.384069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.384277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.998 [2024-04-24 21:35:24.384305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.998 qpair failed and we were unable to recover it.
00:20:58.998 [2024-04-24 21:35:24.384535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.384755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.384784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.385001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.385209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.385237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.385445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.385674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.385702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.385908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.386108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.386136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.386312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.386491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.386518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.386730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.386967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.386996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.387201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.387434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.387460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.387677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.387921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.387948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.388150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.388326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.388356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.388590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.388833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.388862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.389100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.389282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.389307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.389506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.389717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.389748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.389950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.390177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.390205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.390422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.390624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.390659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.390891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.391073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.391103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.391281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.391513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.391540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.391708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.391887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.391917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.392154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.392364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.392391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.392569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.392811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.392839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.393007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.393202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.393230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.393441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.393612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.393648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.393848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.394053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.394080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.394278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.394460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.394489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.394716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.394920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.394948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.395178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.395375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.395403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.395603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.395810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.395838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.396041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.396248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.396276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.396478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.396660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.999 [2024-04-24 21:35:24.396690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:58.999 qpair failed and we were unable to recover it.
00:20:58.999 [2024-04-24 21:35:24.396900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.397109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.397137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.397316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.397506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.397532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.397749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.397945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.397973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.398171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.398373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.398401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.398606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.398809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.398838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.399083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.399320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.399348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.399550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.399759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.399788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.400022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.400231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.400259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.400470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.400680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.400708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.400899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.401185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.401214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.401389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.401623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.401660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.401862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.402066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.402093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.402305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.402488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.402513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.402700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.402935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.402963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.403167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.403352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.403380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.403554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.403771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.403800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.403998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.404202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.404229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.404578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.404836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.404864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.405081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.405286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.405314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.405546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.405763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.405792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.406006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.406209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.406237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.406439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.406673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.406700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.406924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.407139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.407167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.407374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.407559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.407587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.407811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.408038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.408064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.408319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.408535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.408565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.408810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.409008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.409037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.409245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.409438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.409463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.409646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.409881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.000 [2024-04-24 21:35:24.409909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.000 qpair failed and we were unable to recover it.
00:20:59.000 [2024-04-24 21:35:24.410107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.410302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.410331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.001 qpair failed and we were unable to recover it.
00:20:59.001 [2024-04-24 21:35:24.410559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.410734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.410762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.001 qpair failed and we were unable to recover it.
00:20:59.001 [2024-04-24 21:35:24.410967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.411174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.411204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.001 qpair failed and we were unable to recover it.
00:20:59.001 [2024-04-24 21:35:24.411413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.411640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.411675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.001 qpair failed and we were unable to recover it.
00:20:59.001 [2024-04-24 21:35:24.411915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.412115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.412144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.001 qpair failed and we were unable to recover it.
00:20:59.001 [2024-04-24 21:35:24.412349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.412579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.412607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.001 qpair failed and we were unable to recover it.
00:20:59.001 [2024-04-24 21:35:24.412828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.413058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.413087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.001 qpair failed and we were unable to recover it.
00:20:59.001 [2024-04-24 21:35:24.413283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.413481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.413509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.001 qpair failed and we were unable to recover it.
00:20:59.001 [2024-04-24 21:35:24.413766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.413928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.413970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.001 qpair failed and we were unable to recover it.
00:20:59.001 [2024-04-24 21:35:24.414207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.414412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.414441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.001 qpair failed and we were unable to recover it.
00:20:59.001 [2024-04-24 21:35:24.414680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.414916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.001 [2024-04-24 21:35:24.414944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.001 qpair failed and we were unable to recover it.
00:20:59.001 [2024-04-24 21:35:24.415157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.415329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.415357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.001 qpair failed and we were unable to recover it. 00:20:59.001 [2024-04-24 21:35:24.415595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.415826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.415854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.001 qpair failed and we were unable to recover it. 00:20:59.001 [2024-04-24 21:35:24.416091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.416306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.416335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.001 qpair failed and we were unable to recover it. 00:20:59.001 [2024-04-24 21:35:24.416580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.416799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.416833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.001 qpair failed and we were unable to recover it. 00:20:59.001 [2024-04-24 21:35:24.417024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.417245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.417270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.001 qpair failed and we were unable to recover it. 00:20:59.001 [2024-04-24 21:35:24.417425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.417582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.417606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.001 qpair failed and we were unable to recover it. 00:20:59.001 [2024-04-24 21:35:24.417851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.418027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.418055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.001 qpair failed and we were unable to recover it. 
00:20:59.001 [2024-04-24 21:35:24.418259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.418434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.418462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.001 qpair failed and we were unable to recover it. 00:20:59.001 [2024-04-24 21:35:24.418676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.418892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.418918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.001 qpair failed and we were unable to recover it. 00:20:59.001 [2024-04-24 21:35:24.419208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.419645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.419694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.001 qpair failed and we were unable to recover it. 00:20:59.001 [2024-04-24 21:35:24.419895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.420183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.001 [2024-04-24 21:35:24.420208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.001 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.420412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.420598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.420635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.420860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.421078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.421106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.421343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.421557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.421592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 
00:20:59.002 [2024-04-24 21:35:24.421799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.422032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.422060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.422286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.422518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.422546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.422726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.422964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.422992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.423255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.423484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.423512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.423718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.423941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.423965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.424184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.424418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.424446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.424681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.424846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.424875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 
00:20:59.002 [2024-04-24 21:35:24.425097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.425345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.425373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.425552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.425752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.425782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.425997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.426199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.426231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.426476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.426675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.426703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.426966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.427221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.427249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.427479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.427687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.427716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.427923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.428135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.428163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 
00:20:59.002 [2024-04-24 21:35:24.428391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.428589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.428617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.428840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.429018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.429046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.429260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.429457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.429487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.429720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.429947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.429972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.430200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.430393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.430421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.430677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.430890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.430925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.431144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.431322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.431347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 
00:20:59.002 [2024-04-24 21:35:24.431531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.431761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.431787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.002 [2024-04-24 21:35:24.431945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.432102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.002 [2024-04-24 21:35:24.432127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.002 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.432337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.432547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.432572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.432777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.432962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.432989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.433204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.433403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.433430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.433672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.433857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.433882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.434070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.434280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.434305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 
00:20:59.003 [2024-04-24 21:35:24.434498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.434709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.434736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.434928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.435084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.435109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.435309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.435521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.435547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.435790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.435976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.436002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.436192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.436378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.436403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.436616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.436867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.436892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.437092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.437249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.437275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 
00:20:59.003 [2024-04-24 21:35:24.437483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.437674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.437700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.437903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.438115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.438154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.438357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.438547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.438572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.438757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.439016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.439044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.439248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.439451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.439493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.439691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.439856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.439882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.440093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.440284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.440309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 
00:20:59.003 [2024-04-24 21:35:24.440522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.440759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.440786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.440942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.441148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.441176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.441411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.441570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.441596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.441833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.442040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.442070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.442278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.442467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.003 [2024-04-24 21:35:24.442492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.003 qpair failed and we were unable to recover it. 00:20:59.003 [2024-04-24 21:35:24.442738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.442951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.442976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.443161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.443345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.443370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 
00:20:59.004 [2024-04-24 21:35:24.443554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.443719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.443745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.443939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.444150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.444175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.444357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.444568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.444593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.444787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.444971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.444996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.445157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.445371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.445400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.445587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.445809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.445836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.446025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.446218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.446244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 
00:20:59.004 [2024-04-24 21:35:24.446454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.446645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.446671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.446863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.447022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.447047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.447265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.447503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.447531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.447773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.447937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.447962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.448176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.448425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.448450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.448639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.448830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.448856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.449030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.449214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.449240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 
00:20:59.004 [2024-04-24 21:35:24.449403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.449588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.449614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.449796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.449956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.449983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.450165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.450354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.450379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.450592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.450790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.450816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.451002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.451185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.451211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.451419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.451655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.451684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.451914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.452101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.452126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 
00:20:59.004 [2024-04-24 21:35:24.452311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.452499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.452523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.452690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.452876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.452901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.453120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.453303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.453328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.453540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.453781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.453810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.454042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.454259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.454284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.004 qpair failed and we were unable to recover it. 00:20:59.004 [2024-04-24 21:35:24.454519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.004 [2024-04-24 21:35:24.454692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.454720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.454916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.455124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.455149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 
00:20:59.005 [2024-04-24 21:35:24.455334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.455537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.455565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.455756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.455939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.455964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.456141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.456343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.456370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.456579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.456801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.456830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.457045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.457284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.457308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.457491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.457704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.457730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.457888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.458068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.458093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 
00:20:59.005 [2024-04-24 21:35:24.458305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.458514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.458542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.458736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.458919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.458944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.459120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.459316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.459341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.459499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.459696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.459723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.459912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.460122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.460150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.460353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.460561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.460591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.460846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.461039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.461064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 
00:20:59.005 [2024-04-24 21:35:24.461256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.461441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.461466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.461654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.461856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.461884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.462089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.462325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.462350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.462599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.462827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.462853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.463041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.463203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.463233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.463432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.463620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.463659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.005 qpair failed and we were unable to recover it. 00:20:59.005 [2024-04-24 21:35:24.463838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.005 [2024-04-24 21:35:24.464025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.006 [2024-04-24 21:35:24.464051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.006 qpair failed and we were unable to recover it. 
00:20:59.006 [2024-04-24 21:35:24.464274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.006 [2024-04-24 21:35:24.464494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.006 [2024-04-24 21:35:24.464519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.006 qpair failed and we were unable to recover it.
[... the same four-record sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f68d4000b90 against 10.0.0.2 port 4420, then "qpair failed and we were unable to recover it.") repeats continuously with new timestamps from 21:35:24.464698 through 21:35:24.530450 ...]
00:20:59.012 [2024-04-24 21:35:24.530612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.530809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.530834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.531025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.531188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.531212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.531424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.531644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.531675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.531879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.532070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.532095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.532275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.532442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.532467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.532654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.532814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.532840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.533034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.533239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.533267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 
00:20:59.012 [2024-04-24 21:35:24.533450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.533694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.533723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.533937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.534125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.534150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.534364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.534556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.534582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.534801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.534988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.535014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.535177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.535331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.535358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.535587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.535829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.535855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.536110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.536294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.536319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 
00:20:59.012 [2024-04-24 21:35:24.536512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.536730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.536760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.536995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.537182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.537208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.537368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.537573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.537602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.537843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.538005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.538030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.538196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.538374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.538398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.012 qpair failed and we were unable to recover it. 00:20:59.012 [2024-04-24 21:35:24.538585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.538836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.012 [2024-04-24 21:35:24.538865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.539047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.539234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.539259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 
00:20:59.013 [2024-04-24 21:35:24.539467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.539671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.539711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.539927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.540143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.540168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.540326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.540505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.540530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.540740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.540902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.540946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.541175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.541422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.541447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.541639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.541827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.541852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.542037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.542245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.542270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 
00:20:59.013 [2024-04-24 21:35:24.542459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.542645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.542676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.542882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.543118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.543147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.543359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.543543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.543570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.543770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.543959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.543984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.544149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.544331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.544356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.544566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.544746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.544772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.544937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.545097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.545123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 
00:20:59.013 [2024-04-24 21:35:24.545348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.545538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.545563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.545776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.545939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.545964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.546155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.546360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.546385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.546626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.546796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.546822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.546985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.547182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.547207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.547401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.547565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.547592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.547809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.548014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.548042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 
00:20:59.013 [2024-04-24 21:35:24.548251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.548459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.548487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.548719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.548904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.548929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.549142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.549320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.549345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.549550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.549727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.549768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.549961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.550149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.013 [2024-04-24 21:35:24.550174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.013 qpair failed and we were unable to recover it. 00:20:59.013 [2024-04-24 21:35:24.550331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.550562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.550591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.550804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.551023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.551051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 
00:20:59.014 [2024-04-24 21:35:24.551280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.551505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.551533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.551726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.551920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.551945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.552119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.552449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.552475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.552736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.552933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.552958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.553367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.553606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.553641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.553854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.554153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.554194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.554458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.554677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.554703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 
00:20:59.014 [2024-04-24 21:35:24.554953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.555197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.555235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.555455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.555667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.555710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.555917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.556146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.556174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.556400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.556658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.556706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.556921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.557108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.557134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.557316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.557554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.557581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.557804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.557994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.558021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 
00:20:59.014 [2024-04-24 21:35:24.558269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.558519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.558546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.558754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.558933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.558960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.559189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.559387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.559430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.559607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.559810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.559835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.560053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.560236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.560264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.560504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.560653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.560679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.560843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.561013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.561042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 
00:20:59.014 [2024-04-24 21:35:24.561333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.561513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.561537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.561729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.561912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.561938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.562130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.562334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.562360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.562513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.562729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.562755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.562988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.563199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.563223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.014 qpair failed and we were unable to recover it. 00:20:59.014 [2024-04-24 21:35:24.563420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.563605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.014 [2024-04-24 21:35:24.563638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.563852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.564059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.564088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 
00:20:59.015 [2024-04-24 21:35:24.564328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.564557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.564586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.564845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.565022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.565048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.565267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.565422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.565452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.565636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.565854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.565879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.566135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.566358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.566385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.566593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.566781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.566811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.567023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.567235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.567260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 
00:20:59.015 [2024-04-24 21:35:24.567502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.567724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.567750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.567936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.568103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.568142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.568348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.568565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.568592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.568786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.568969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.568994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.569180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.569422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.569450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.569657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.569856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.569884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.570077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.570312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.570338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 
00:20:59.015 [2024-04-24 21:35:24.570530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.570708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.570734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.570899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.571082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.571108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.571289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.571510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.571535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.571745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.571951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.571979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.572158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.572357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.572385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.572590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.572853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.572880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 00:20:59.015 [2024-04-24 21:35:24.573086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.573270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.015 [2024-04-24 21:35:24.573296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.015 qpair failed and we were unable to recover it. 
00:20:59.015 [2024-04-24 21:35:24.573500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.015 [2024-04-24 21:35:24.573722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.015 [2024-04-24 21:35:24.573750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.015 qpair failed and we were unable to recover it.
00:20:59.015 [... the same four-line failure pattern repeats, with fresh timestamps, for every reconnect attempt from 21:35:24.573 through 21:35:24.643: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f68d4000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
00:20:59.021 [2024-04-24 21:35:24.643114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.021 [2024-04-24 21:35:24.643397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.021 [2024-04-24 21:35:24.643425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.021 qpair failed and we were unable to recover it.
00:20:59.021 [2024-04-24 21:35:24.643636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.643856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.643884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.644059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.644285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.644313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.644492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.644722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.644751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.644975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.645216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.645243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.645476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.645707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.645736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.645964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.646166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.646193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.646425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.646655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.646684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 
00:20:59.021 [2024-04-24 21:35:24.646890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.647095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.647124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.647351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.647578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.647606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.647818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.648046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.648074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.648254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.648439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.648468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.648756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.648962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.648990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.649228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.649402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.649431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.021 qpair failed and we were unable to recover it. 00:20:59.021 [2024-04-24 21:35:24.649666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.021 [2024-04-24 21:35:24.649870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.649894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 
00:20:59.022 [2024-04-24 21:35:24.650072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.650310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.650337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.650527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.650801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.650830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.651042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.651269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.651297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.651524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.651747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.651772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.651965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.652167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.652191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.652413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.652636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.652665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.652841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.653172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.653201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 
00:20:59.022 [2024-04-24 21:35:24.653404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.653623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.653678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.653895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.654098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.654126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.654353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.654548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.654573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.654760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.654955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.654983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.655196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.655424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.655452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.655655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.655857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.655885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.656119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.656380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.656419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 
00:20:59.022 [2024-04-24 21:35:24.656661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.656889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.656917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.657125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.657308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.657336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.657537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.657717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.657746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.657976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.658205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.658232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.658441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.658651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.658680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.658915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.659117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.659145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.659382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.659578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.659606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 
00:20:59.022 [2024-04-24 21:35:24.659849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.660036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.022 [2024-04-24 21:35:24.660064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.022 qpair failed and we were unable to recover it. 00:20:59.022 [2024-04-24 21:35:24.660266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.297 [2024-04-24 21:35:24.660473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.297 [2024-04-24 21:35:24.660503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.297 qpair failed and we were unable to recover it. 00:20:59.297 [2024-04-24 21:35:24.660743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.297 [2024-04-24 21:35:24.660900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.297 [2024-04-24 21:35:24.660927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.297 qpair failed and we were unable to recover it. 00:20:59.297 [2024-04-24 21:35:24.661179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.297 [2024-04-24 21:35:24.661405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.297 [2024-04-24 21:35:24.661433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.297 qpair failed and we were unable to recover it. 00:20:59.297 [2024-04-24 21:35:24.661661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.297 [2024-04-24 21:35:24.661867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.297 [2024-04-24 21:35:24.661895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.297 qpair failed and we were unable to recover it. 00:20:59.297 [2024-04-24 21:35:24.662102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.297 [2024-04-24 21:35:24.662303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.662331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.662511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.662743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.662771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 
00:20:59.298 [2024-04-24 21:35:24.662976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.663201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.663229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.663470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.663674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.663704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.663936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.664140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.664167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.664398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.664625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.664655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.664838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.665072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.665097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.665318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.665556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.665584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.665791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.665999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.666026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 
00:20:59.298 [2024-04-24 21:35:24.666214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.666449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.666477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.666694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.666929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.666957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.667164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.667343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.667367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.667559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.667764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.667794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.668027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.668231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.668260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.668467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.668640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.668669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.668872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.669043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.669068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 
00:20:59.298 [2024-04-24 21:35:24.669288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.669488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.669517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.669760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.669918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.669957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.670248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.670450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.670478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.670695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.670916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.670946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.671179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.671382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.671410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.671576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.671787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.671816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.672011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.672184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.672211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 
00:20:59.298 [2024-04-24 21:35:24.672406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.672634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.672663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.672846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.673049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.673077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.673283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.673483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.673512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.298 qpair failed and we were unable to recover it. 00:20:59.298 [2024-04-24 21:35:24.673749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.298 [2024-04-24 21:35:24.673952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.673981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.674304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.674547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.674575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.674771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.674972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.675000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.675182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.675393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.675421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 
00:20:59.299 [2024-04-24 21:35:24.675664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.675867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.675896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.676095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.676327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.676355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.676565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.676795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.676824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.677053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.677293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.677318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.677550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.677747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.677775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.677997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.678214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.678242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.678445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.678653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.678682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 
00:20:59.299 [2024-04-24 21:35:24.678885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.679118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.679147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.679368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.679573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.679599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.679789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.680069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.680099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.680278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.680478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.680506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.680749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.680937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.680977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.681210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.681428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.681453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.681661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.681862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.681887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 
00:20:59.299 [2024-04-24 21:35:24.682095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.682296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.682324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.682557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.682788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.682814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.683032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.683259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.683285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.683492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.683795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.683821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.684008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.684188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.684218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.299 qpair failed and we were unable to recover it. 00:20:59.299 [2024-04-24 21:35:24.684447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.684658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.299 [2024-04-24 21:35:24.684687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.684896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.685101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.685129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 
00:20:59.300 [2024-04-24 21:35:24.685362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.685566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.685591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.685785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.685974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.686004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.686208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.686410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.686439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.686664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.686884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.686911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.687196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.687434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.687459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.687700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.687904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.687934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.688135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.688372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.688397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 
00:20:59.300 [2024-04-24 21:35:24.688608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.688777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.688804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.689074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.689313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.689339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.689490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.689682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.689708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.689939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.690094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.690119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.690323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.690504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.690528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.690764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.690945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.690986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 00:20:59.300 [2024-04-24 21:35:24.691167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.691402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.300 [2024-04-24 21:35:24.691430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.300 qpair failed and we were unable to recover it. 
[... the same failure pattern repeats without interruption through 2024-04-24 21:35:24.756527 (console time 00:20:59.300 to 00:20:59.306): each iteration logs two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." ...]
00:20:59.306 [2024-04-24 21:35:24.756803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.306 [2024-04-24 21:35:24.756984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.306 [2024-04-24 21:35:24.757011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.306 qpair failed and we were unable to recover it. 00:20:59.306 [2024-04-24 21:35:24.757294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.306 [2024-04-24 21:35:24.757528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.306 [2024-04-24 21:35:24.757555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.306 qpair failed and we were unable to recover it. 00:20:59.306 [2024-04-24 21:35:24.757794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.306 [2024-04-24 21:35:24.757984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.306 [2024-04-24 21:35:24.758009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.306 qpair failed and we were unable to recover it. 00:20:59.306 [2024-04-24 21:35:24.758194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.306 [2024-04-24 21:35:24.758351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.306 [2024-04-24 21:35:24.758377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.306 qpair failed and we were unable to recover it. 00:20:59.306 [2024-04-24 21:35:24.758616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.758830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.758859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.759036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.759245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.759273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.759516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.759731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.759757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 
00:20:59.307 [2024-04-24 21:35:24.759942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.760216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.760244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.760512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.760731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.760757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.760948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.761131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.761156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.761392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.761591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.761619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.761838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.762054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.762081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.762351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.762592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.762620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.762860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.763120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.763145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 
00:20:59.307 [2024-04-24 21:35:24.763411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.763597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.763624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.763855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.764091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.764120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.764285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.764488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.764517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.764748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.764962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.764987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.765176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.765406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.765439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.765684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.765946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.765971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.766138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.766304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.766330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 
00:20:59.307 [2024-04-24 21:35:24.766535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.766770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.766799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.767010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.767290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.767318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.767556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.767765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.767791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.767975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.768197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.768225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.768427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.768608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.768641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.768841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.769049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.769077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.769286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.769447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.769488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 
00:20:59.307 [2024-04-24 21:35:24.769742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.769905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.769936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.770168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.770354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.770379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.770544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.770732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.770758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.770922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.771170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.307 [2024-04-24 21:35:24.771195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.307 qpair failed and we were unable to recover it. 00:20:59.307 [2024-04-24 21:35:24.771462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.771650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.771694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.771875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.772076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.772104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.772312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.772519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.772549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 
00:20:59.308 [2024-04-24 21:35:24.772766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.772956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.772982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.773163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.773352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.773382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.773618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.773865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.773893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.774080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.774307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.774341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.774545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.774732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.774759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.774945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.775181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.775209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.775384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.775597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.775622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 
00:20:59.308 [2024-04-24 21:35:24.775795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.776017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.776042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.776197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.776463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.776488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.776746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.776942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.776982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.777210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.777449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.777474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.777664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.777854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.777879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.778041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.778259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.778285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.778463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.778667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.778701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 
00:20:59.308 [2024-04-24 21:35:24.778933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.779162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.779187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.779377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.779582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.779609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.779805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.780041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.780069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.780249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.780451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.780479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.780720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.780947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.780972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.308 qpair failed and we were unable to recover it. 00:20:59.308 [2024-04-24 21:35:24.781135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.308 [2024-04-24 21:35:24.781353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.781381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.781595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.781797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.781822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 
00:20:59.309 [2024-04-24 21:35:24.782006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.782194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.782219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.782405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.782591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.782616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.782841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.783028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.783054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.783244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.783422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.783447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.783606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.783798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.783824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.784013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.784241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.784269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.784501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.784712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.784738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 
00:20:59.309 [2024-04-24 21:35:24.784930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.785114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.785139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.785322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.785531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.785558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.785768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.785973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.786015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.786207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.786393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.786417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.786636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.786847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.786876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.787081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.787260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.787285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.787479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.787703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.787729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 
00:20:59.309 [2024-04-24 21:35:24.787945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.788155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.788182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.788387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.788613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.788654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.788825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.789010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.789037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.789208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.789423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.789448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.789649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.789837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.789862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.790051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.790235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.790260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.790469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.790690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.790717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 
00:20:59.309 [2024-04-24 21:35:24.790901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.791125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.791150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.791357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.791544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.791569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.791764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.791951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.791976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.792185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.792386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.309 [2024-04-24 21:35:24.792415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.309 qpair failed and we were unable to recover it. 00:20:59.309 [2024-04-24 21:35:24.792657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.792941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.792970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.793169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.793330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.793357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.793540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.793729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.793756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 
00:20:59.310 [2024-04-24 21:35:24.793950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.794138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.794163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.794329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.794486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.794511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.794725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.794984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.795009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.795190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.795371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.795396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.795585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.795769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.795798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.796005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.796209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.796238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.796443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.796687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.796713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 
00:20:59.310 [2024-04-24 21:35:24.796897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.797105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.797131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.797338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.797525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.797551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.797740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.797975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.798003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.798201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.798395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.798422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.798635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.798826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.798851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.799039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.799259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.799285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.799498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.799689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.799715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 
00:20:59.310 [2024-04-24 21:35:24.799875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.800084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.800112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.800321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.800480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.800505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.800697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.800897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.800925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.801096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.801322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.801350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.801528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.801809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.801838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.802040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.802230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.802256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 00:20:59.310 [2024-04-24 21:35:24.802496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.802691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.310 [2024-04-24 21:35:24.802716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:20:59.310 qpair failed and we were unable to recover it. 
00:20:59.312 [2024-04-24 21:35:24.821997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.822193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.822221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.312 qpair failed and we were unable to recover it.
00:20:59.312 [2024-04-24 21:35:24.822393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.822633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.822662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.312 qpair failed and we were unable to recover it.
00:20:59.312 [2024-04-24 21:35:24.822850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.823125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.823154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.312 qpair failed and we were unable to recover it.
00:20:59.312 [2024-04-24 21:35:24.823369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.823580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.823605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:20:59.312 qpair failed and we were unable to recover it.
00:20:59.312 [2024-04-24 21:35:24.823827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.824056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.824086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:20:59.312 qpair failed and we were unable to recover it.
00:20:59.312 [2024-04-24 21:35:24.824317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.824529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.824555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:20:59.312 qpair failed and we were unable to recover it.
00:20:59.312 [2024-04-24 21:35:24.824741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.824929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.312 [2024-04-24 21:35:24.824973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:20:59.312 qpair failed and we were unable to recover it.
00:20:59.315 [2024-04-24 21:35:24.868946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.315 [2024-04-24 21:35:24.869176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.315 [2024-04-24 21:35:24.869217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.315 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.869455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.869682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.869707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.869948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.870171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.870217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.870380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.870573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.870598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.870818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.871071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.871115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.871329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.871509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.871534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.871766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.872017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.872043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 
00:20:59.316 [2024-04-24 21:35:24.872253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.872481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.872505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.872707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.872913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.872956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.873174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.873407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.873433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.873612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.873828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.873873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.874080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.874338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.874380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.874566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.874773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.874800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.875007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.875233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.875280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 
00:20:59.316 [2024-04-24 21:35:24.875490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.875716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.875742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.875955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.876217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.876259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.876497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.876746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.876789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.877023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.877256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.877284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.877508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.877708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.877733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.877948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.878146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.878187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.878397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.878597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.878623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 
00:20:59.316 [2024-04-24 21:35:24.878836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.879038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.879079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.879329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.879555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.879580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.879759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.880016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.880056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.880297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.880475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.880500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.880740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.880992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.881035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.881272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.881475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.881500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.881708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.881931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.881975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 
00:20:59.316 [2024-04-24 21:35:24.882214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.882413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.882438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.316 qpair failed and we were unable to recover it. 00:20:59.316 [2024-04-24 21:35:24.882621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.882812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.316 [2024-04-24 21:35:24.882854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.883069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.883294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.883336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.883522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.883709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.883735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.883943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.884142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.884184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.884402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.884603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.884633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.884829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.885012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.885054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 
00:20:59.317 [2024-04-24 21:35:24.885277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.885478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.885522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.885758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.886017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.886059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.886302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.886479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.886504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.886715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.886925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.886967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.887171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.887361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.887403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.887615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.887843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.887869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.888069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.888322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.888364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 
00:20:59.317 [2024-04-24 21:35:24.888577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.888790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.888816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.889025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.889252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.889294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.889532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.889733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.889759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.889932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.890192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.890233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.890444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.890686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.890711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.890927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.891149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.891191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.891411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.891613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.891644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 
00:20:59.317 [2024-04-24 21:35:24.891832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.892047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.892089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.892274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.892539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.892581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.892767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.892947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.892975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.893238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.893458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.317 [2024-04-24 21:35:24.893502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.317 qpair failed and we were unable to recover it. 00:20:59.317 [2024-04-24 21:35:24.893712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.893924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.893967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.894209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.894435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.894477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.894748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.894967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.895009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 
00:20:59.318 [2024-04-24 21:35:24.895218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.895454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.895481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.895743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.895966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.895991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.896231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.896436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.896478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.896658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.896866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.896894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.897122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.897354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.897382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.897581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.897819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.897863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.898060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.898273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.898317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 
00:20:59.318 [2024-04-24 21:35:24.898505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.898694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.898736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.898918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.899142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.899184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.899393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.899598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.899623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.899841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.900063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.900105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.900340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.900539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.900564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.900750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.900994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.901022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.901241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.901440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.901482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 
00:20:59.318 [2024-04-24 21:35:24.901730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.901951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.901996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.902207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.902382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.902423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.902647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.902836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.902879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.903124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.903352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.903394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.903576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.903785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.903810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.904023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.904218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.904261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.904514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.904763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.904807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 
00:20:59.318 [2024-04-24 21:35:24.905059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.905358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.905399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.905616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.905790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.905815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.906052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.906275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.906317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.906488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.906726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.906769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.907016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.907273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.318 [2024-04-24 21:35:24.907314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.318 qpair failed and we were unable to recover it. 00:20:59.318 [2024-04-24 21:35:24.907519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.907690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.907718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.907995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.908231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.908273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 
00:20:59.319 [2024-04-24 21:35:24.908495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.908749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.908777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.909075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.909314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.909361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.909582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.909791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.909817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.910037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.910267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.910309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.910544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.910825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.910867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.911079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.911333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.911360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.911565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.911752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.911778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 
00:20:59.319 [2024-04-24 21:35:24.911968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.912199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.912241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.912479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.912670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.912699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.912917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.913181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.913222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.913459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.913661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.913687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.913902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.914130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.914176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.914381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.914609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.914643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.914852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.915113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.915154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 
00:20:59.319 [2024-04-24 21:35:24.915368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.915594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.915619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.915857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.916056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.916098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.916297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.916531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.916556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.916744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.916959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.916999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.917233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.917438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.917480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.917693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.917920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.917962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 00:20:59.319 [2024-04-24 21:35:24.918195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.918408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.319 [2024-04-24 21:35:24.918434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.319 qpair failed and we were unable to recover it. 
00:20:59.319 [2024-04-24 21:35:24.918707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.319 [2024-04-24 21:35:24.918922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.319 [2024-04-24 21:35:24.918969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:20:59.319 qpair failed and we were unable to recover it.
00:20:59.319 [... last four messages repeated for each subsequent reconnect attempt from 21:35:24.919 through 21:35:24.991, all against tqpair=0x7f68dc000b90 (addr=10.0.0.2, port=4420), every attempt ending "qpair failed and we were unable to recover it." ...]
00:20:59.597 [2024-04-24 21:35:24.991499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.991667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.991694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.597 qpair failed and we were unable to recover it. 00:20:59.597 [2024-04-24 21:35:24.991910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.992116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.992159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.597 qpair failed and we were unable to recover it. 00:20:59.597 [2024-04-24 21:35:24.992342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.992529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.992554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.597 qpair failed and we were unable to recover it. 00:20:59.597 [2024-04-24 21:35:24.992783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.993006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.993051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.597 qpair failed and we were unable to recover it. 00:20:59.597 [2024-04-24 21:35:24.993259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.993443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.993468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.597 qpair failed and we were unable to recover it. 00:20:59.597 [2024-04-24 21:35:24.993669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.993915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.993957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.597 qpair failed and we were unable to recover it. 00:20:59.597 [2024-04-24 21:35:24.994142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.994370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.597 [2024-04-24 21:35:24.994395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.597 qpair failed and we were unable to recover it. 
00:20:59.598 [2024-04-24 21:35:24.994556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.994747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.994794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:24.995043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.995296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.995337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:24.995526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.995768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.995810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:24.996022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.996233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.996259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:24.996425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.996640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.996666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:24.996881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.997113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.997158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:24.997396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.997604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.997636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 
00:20:59.598 [2024-04-24 21:35:24.997845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.998060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.998102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:24.998357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.998543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.998569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:24.998781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.998986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.999029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:24.999242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.999448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.999478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:24.999659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.999843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:24.999885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.000074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.000326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.000369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.000581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.000771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.000814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 
00:20:59.598 [2024-04-24 21:35:25.001027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.001225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.001251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.001434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.001634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.001660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.001879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.002143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.002184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.002371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.002575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.002601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.002821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.003047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.003093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.003288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.003491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.003515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.003717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.003944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.003986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 
00:20:59.598 [2024-04-24 21:35:25.004197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.004407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.004431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.004616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.004795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.004837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.005078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.005342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.005382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.005544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.005782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.005825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.006070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.006235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.006260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.006447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.006683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.006716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 00:20:59.598 [2024-04-24 21:35:25.006948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.007164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.007205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.598 qpair failed and we were unable to recover it. 
00:20:59.598 [2024-04-24 21:35:25.007416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.007602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.598 [2024-04-24 21:35:25.007633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.007868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.008133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.008173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.008424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.008607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.008642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.008835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.009074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.009099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.009334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.009568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.009593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.009786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.010010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.010052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.010289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.010515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.010540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 
00:20:59.599 [2024-04-24 21:35:25.010754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.011009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.011056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.011269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.011473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.011498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.011734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.011965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.011992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.012227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.012432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.012457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.012650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.012839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.012882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.013101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.013296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.013338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.013541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.013750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.013798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 
00:20:59.599 [2024-04-24 21:35:25.014011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.014235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.014279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.014478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.014666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.014692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.014885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.015115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.015159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.015373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.015576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.015601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.015821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.016043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.016085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.016287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.016513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.016538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.016749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.016989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.017032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 
00:20:59.599 [2024-04-24 21:35:25.017267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.017466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.017491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.017724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.017959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.017987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.018222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.018447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.018472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.599 qpair failed and we were unable to recover it. 00:20:59.599 [2024-04-24 21:35:25.018689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.599 [2024-04-24 21:35:25.018898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.018940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.019147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.019353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.019395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.019585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.019766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.019793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.019954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.020140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.020186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 
00:20:59.600 [2024-04-24 21:35:25.020409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.020569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.020595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.020813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.021041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.021084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.021321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.021517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.021542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.021750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.022013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.022056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.022282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.022470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.022495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.022720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.022939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.022964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.023130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.023343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.023368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 
00:20:59.600 [2024-04-24 21:35:25.023553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.023735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.023778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.024021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.024250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.024292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.024505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.024720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.024763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.025002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.025227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.025271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.025458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.025648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.025678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.025928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.026159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.026201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.026408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.026591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.026616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 
00:20:59.600 [2024-04-24 21:35:25.026807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.027012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.027056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.027300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.027507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.027534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.027727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.027931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.027973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.028218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.028418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.028444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.028637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.028846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.028889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.029105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.029331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.029356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.029537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.029725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.029752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 
00:20:59.600 [2024-04-24 21:35:25.029993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.030253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.030297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.030483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.030727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.030756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.600 [2024-04-24 21:35:25.030951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.031194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.600 [2024-04-24 21:35:25.031237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.600 qpair failed and we were unable to recover it. 00:20:59.601 [2024-04-24 21:35:25.031446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.031679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.031708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it. 00:20:59.601 [2024-04-24 21:35:25.031972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.032188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.032231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it. 00:20:59.601 [2024-04-24 21:35:25.032405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.032645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.032671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it. 00:20:59.601 [2024-04-24 21:35:25.032862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.033114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.033157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it. 
00:20:59.601 [2024-04-24 21:35:25.033403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.033606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.033639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it. 00:20:59.601 [2024-04-24 21:35:25.033891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.034083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.034127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it. 00:20:59.601 [2024-04-24 21:35:25.034314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.034519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.034545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it. 00:20:59.601 [2024-04-24 21:35:25.034759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.035017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.035059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it. 00:20:59.601 [2024-04-24 21:35:25.035297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.035529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.035555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it. 00:20:59.601 [2024-04-24 21:35:25.035774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.036003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.036046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it. 00:20:59.601 [2024-04-24 21:35:25.036266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.036492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.036517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it. 
00:20:59.601 [2024-04-24 21:35:25.036767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.036994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.601 [2024-04-24 21:35:25.037039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.601 qpair failed and we were unable to recover it.
[... the same four-message sequence repeats for each retry between 21:35:25.037249 and 21:35:25.107234: two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." ...]
00:20:59.606 [2024-04-24 21:35:25.107476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.606 [2024-04-24 21:35:25.107654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.606 [2024-04-24 21:35:25.107690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.606 qpair failed and we were unable to recover it.
00:20:59.606 [2024-04-24 21:35:25.107927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.606 [2024-04-24 21:35:25.108150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.606 [2024-04-24 21:35:25.108195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.606 qpair failed and we were unable to recover it. 00:20:59.606 [2024-04-24 21:35:25.108402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.606 [2024-04-24 21:35:25.108608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.606 [2024-04-24 21:35:25.108640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.606 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.108837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.109051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.109093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.109339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.109541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.109564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.109728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.109969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.110010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.110256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.110457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.110483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.110682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.110910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.110937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 
00:20:59.607 [2024-04-24 21:35:25.111140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.111373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.111414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.111605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.111791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.111834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.112068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.112264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.112307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.112492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.112726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.112774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.112971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.113156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.113198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.113401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.113615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.113649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.113887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.114144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.114186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 
00:20:59.607 [2024-04-24 21:35:25.114427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.114639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.114666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.114847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.115071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.115113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.115353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.115554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.115578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.115749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.115997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.116041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.116279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.116472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.116514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.116707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.116918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.116961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.117176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.117415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.117442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 
00:20:59.607 [2024-04-24 21:35:25.117656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.117862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.117905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.118114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.118343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.118384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.118578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.118763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.118805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.119041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.119268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.119309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.119492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.119654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.119681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.119902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.120136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.120178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 00:20:59.607 [2024-04-24 21:35:25.120398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.120571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.120595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.607 qpair failed and we were unable to recover it. 
00:20:59.607 [2024-04-24 21:35:25.120783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.607 [2024-04-24 21:35:25.120965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.121007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.121225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.121449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.121493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.121700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.121904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.121948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.122166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.122341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.122381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.122590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.122826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.122853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.123126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.123413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.123456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.123646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.123866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.123905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 
00:20:59.608 [2024-04-24 21:35:25.124152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.124379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.124420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.124612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.124799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.124825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.125010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.125240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.125282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.125490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.125738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.125763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.125990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.126219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.126261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.126484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.126759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.126799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.127041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.127295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.127338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 
00:20:59.608 [2024-04-24 21:35:25.127540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.127742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.127767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.128012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.128217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.128244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.128511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.128695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.128719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.128910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.129130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.129176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.129443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.129674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.129699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.129950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.130137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.130183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.130368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.130581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.130606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 
00:20:59.608 [2024-04-24 21:35:25.130823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.131035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.131078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.131279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.131510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.131534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.131775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.132004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.132047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.608 qpair failed and we were unable to recover it. 00:20:59.608 [2024-04-24 21:35:25.132234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.608 [2024-04-24 21:35:25.132489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.132532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.132747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.132977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.133019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.133263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.133493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.133518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.133752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.133977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.134019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 
00:20:59.609 [2024-04-24 21:35:25.134230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.134451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.134479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.134709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.134999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.135045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.135286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.135486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.135511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.135740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.135981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.136008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.136210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.136433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.136475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.136731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.137021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.137063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.137345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.137602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.137648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 
00:20:59.609 [2024-04-24 21:35:25.137905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.138108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.138150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.138407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.138638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.138664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.138846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.139090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.139135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.139318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.139609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.139658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.139849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.140024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.140071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.140287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.140558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.140600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.140801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.141009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.141050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 
00:20:59.609 [2024-04-24 21:35:25.141271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.141492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.141536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.141735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.141947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.141988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.142236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.142429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.142454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.142603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.142825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.142851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.609 [2024-04-24 21:35:25.143104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.143392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.609 [2024-04-24 21:35:25.143434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.609 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.143623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.143842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.143867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.144138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.144384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.144427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 
00:20:59.610 [2024-04-24 21:35:25.144689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.145057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.145119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.145372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.145560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.145586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.145763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.145957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.145985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.146233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.146519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.146561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.146889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.147045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.147070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.147325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.147555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.147581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.147816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.148072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.148114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 
00:20:59.610 [2024-04-24 21:35:25.148334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.148559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.148584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.148803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.149023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.149069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.149259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.149488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.149533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.149732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.149963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.150005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.150192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.150473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.150516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.150752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.151016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.151058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.151261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.151461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.151486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 
00:20:59.610 [2024-04-24 21:35:25.151701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.151890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.151918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.152147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.152378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.152419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.152797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.153022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.153065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.153278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.153536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.153577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.153798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.154026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.154068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.154274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.154472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.154496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 00:20:59.610 [2024-04-24 21:35:25.154694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.154911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.610 [2024-04-24 21:35:25.154955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.610 qpair failed and we were unable to recover it. 
00:20:59.610 [2024-04-24 21:35:25.155229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.610 [2024-04-24 21:35:25.155516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.610 [2024-04-24 21:35:25.155562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:20:59.610 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 (posix.c:1037), sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 (nvme_tcp.c:2371), "qpair failed and we were unable to recover it." — repeats with consecutive timestamps from 21:35:25.155777 through 21:35:25.225497 ...]
00:20:59.616 [2024-04-24 21:35:25.225686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.616 [2024-04-24 21:35:25.225943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.616 [2024-04-24 21:35:25.225969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:20:59.616 qpair failed and we were unable to recover it.
00:20:59.616 [2024-04-24 21:35:25.226212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.616 [2024-04-24 21:35:25.226389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.616 [2024-04-24 21:35:25.226416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.616 qpair failed and we were unable to recover it. 00:20:59.616 [2024-04-24 21:35:25.226617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.616 [2024-04-24 21:35:25.226815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.616 [2024-04-24 21:35:25.226859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.616 qpair failed and we were unable to recover it. 00:20:59.616 [2024-04-24 21:35:25.227041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.616 [2024-04-24 21:35:25.227245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.616 [2024-04-24 21:35:25.227290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.616 qpair failed and we were unable to recover it. 00:20:59.616 [2024-04-24 21:35:25.227523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.616 [2024-04-24 21:35:25.227736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.616 [2024-04-24 21:35:25.227781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.616 qpair failed and we were unable to recover it. 00:20:59.616 [2024-04-24 21:35:25.227942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.616 [2024-04-24 21:35:25.228158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.228199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.228390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.228607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.228637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.228839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.229072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.229114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 
00:20:59.617 [2024-04-24 21:35:25.229300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.229486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.229513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.229730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.229940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.229983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.230171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.230326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.230351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.230542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.230726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.230756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.231027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.231231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.231273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.231439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.231652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.231679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.231889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.232146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.232174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 
00:20:59.617 [2024-04-24 21:35:25.232380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.232533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.232559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.232724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.232885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.232910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.233159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.233350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.233375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.233573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.233785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.233829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.234043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.234280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.234306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.234497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.234705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.234731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.234929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.235091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.235121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 
00:20:59.617 [2024-04-24 21:35:25.235312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.235473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.235500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.235682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.235881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.235909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.236122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.236309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.236334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.236548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.236765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.236809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.237016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.237218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.237245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.237437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.237638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.237664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.237880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.238117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.238146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 
00:20:59.617 [2024-04-24 21:35:25.238377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.238567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.238592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.238819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.239041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.239083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.617 qpair failed and we were unable to recover it. 00:20:59.617 [2024-04-24 21:35:25.239325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.617 [2024-04-24 21:35:25.239531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.239557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.239748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.239950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.239975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.240167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.240332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.240356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.240596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.240768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.240795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.240981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.241173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.241199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 
00:20:59.618 [2024-04-24 21:35:25.241392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.241576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.241601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.241831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.242056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.242097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.242325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.242535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.242560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.242770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.242994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.243038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.243293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.243477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.243503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.243720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.243926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.243953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.244125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.244300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.244326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 
00:20:59.618 [2024-04-24 21:35:25.244508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.244717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.244760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.244951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.245181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.245206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.245409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.245570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.245595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.245821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.246006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.246031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.246199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.246380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.246409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.246581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.246771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.246815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.247059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.247288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.247335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 
00:20:59.618 [2024-04-24 21:35:25.247522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.247728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.247773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.248017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.248274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.248300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.248490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.248735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.248765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.248951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.249188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.249215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.618 qpair failed and we were unable to recover it. 00:20:59.618 [2024-04-24 21:35:25.249427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.618 [2024-04-24 21:35:25.249612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.249646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.249837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.250035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.250077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.250261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.250447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.250471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 
00:20:59.619 [2024-04-24 21:35:25.250648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.251780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.251812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.252047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.252259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.252286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.252499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.252711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.252754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.252992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.253223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.253249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.253437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.253637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.253662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.253844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.254007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.254032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.254268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.254444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.254470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 
00:20:59.619 [2024-04-24 21:35:25.254638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.254816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.254859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.255084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.255310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.255354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.255544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.255737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.255781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.256017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.256277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.256318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.256511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.256719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.256763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.256977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.257195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.257221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.257409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.257592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.257617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 
00:20:59.619 [2024-04-24 21:35:25.257856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.258062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.258106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.258342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.258545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.258569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.258744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.258970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.619 [2024-04-24 21:35:25.259014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.619 qpair failed and we were unable to recover it. 00:20:59.619 [2024-04-24 21:35:25.259228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.890 [2024-04-24 21:35:25.259433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.890 [2024-04-24 21:35:25.259476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.890 qpair failed and we were unable to recover it. 00:20:59.890 [2024-04-24 21:35:25.259694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.890 [2024-04-24 21:35:25.259910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.890 [2024-04-24 21:35:25.259953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.890 qpair failed and we were unable to recover it. 00:20:59.890 [2024-04-24 21:35:25.260170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.260399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.260428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.260657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.260873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.260902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 
00:20:59.891 [2024-04-24 21:35:25.261140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.261377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.261420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.261640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.261814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.261840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.262059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.262322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.262349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.262558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.262735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.262759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.263011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.263215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.263256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.263493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.263707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.263732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.263934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.264159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.264201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 
00:20:59.891 [2024-04-24 21:35:25.264411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.264609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.264640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.264849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.265033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.265076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.265285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.265485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.265528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.265725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.265914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.265957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.266166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.266408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.266451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.266632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.266810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.266834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.267014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.267212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.267255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 
00:20:59.891 [2024-04-24 21:35:25.267488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.267688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.267742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.267933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.268189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.268231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.268421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.268593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.268619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.268817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.269060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.269088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.269345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.269519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.269546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.269763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.269983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.270010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 00:20:59.891 [2024-04-24 21:35:25.270215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.270398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.270423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.891 qpair failed and we were unable to recover it. 
00:20:59.891 [2024-04-24 21:35:25.270639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.270827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.891 [2024-04-24 21:35:25.270870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.892 qpair failed and we were unable to recover it.
[the same sequence -- two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error, then "qpair failed and we were unable to recover it." -- repeats roughly 150 more times between 21:35:25.271080 and 21:35:25.343305 (log clock 00:20:59.892 through 00:20:59.897), every attempt against tqpair=0x7f68dc000b90, addr=10.0.0.2, port=4420]
00:20:59.897 [2024-04-24 21:35:25.343471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.343656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.343683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.897 qpair failed and we were unable to recover it. 00:20:59.897 [2024-04-24 21:35:25.343843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.344085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.344128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.897 qpair failed and we were unable to recover it. 00:20:59.897 [2024-04-24 21:35:25.344373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.344575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.344600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.897 qpair failed and we were unable to recover it. 00:20:59.897 [2024-04-24 21:35:25.344796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.345004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.345046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.897 qpair failed and we were unable to recover it. 00:20:59.897 [2024-04-24 21:35:25.345285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.345517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.345542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.897 qpair failed and we were unable to recover it. 00:20:59.897 [2024-04-24 21:35:25.345728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.345943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.345991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.897 qpair failed and we were unable to recover it. 00:20:59.897 [2024-04-24 21:35:25.346226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.346449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.346492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.897 qpair failed and we were unable to recover it. 
00:20:59.897 [2024-04-24 21:35:25.346720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.346954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.897 [2024-04-24 21:35:25.346980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.347182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.347377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.347422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.347635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.347850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.347874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.348095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.348325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.348368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.348553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.348742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.348769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.349004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.349256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.349298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.349479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.349688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.349714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 
00:20:59.898 [2024-04-24 21:35:25.349922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.350183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.350225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.350438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.350671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.350697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.350911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.351138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.351186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.351403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.351641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.351667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.351829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.352066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.352108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.352285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.352518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.352546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.352762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.352994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.353038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 
00:20:59.898 [2024-04-24 21:35:25.353335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.353546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.353572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.353762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.354014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.354057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.354256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.354511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.354552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.354740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.354982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.355011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.355261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.355503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.355527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.355773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.355973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.356020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.356269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.356471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.356495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 
00:20:59.898 [2024-04-24 21:35:25.356693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.356934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.356961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.357190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.357443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.357485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.357703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.357911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.357954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.358169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.358417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.358459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.358683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.358897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.358938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.359152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.359380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.359422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.359606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.359795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.359821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 
00:20:59.898 [2024-04-24 21:35:25.360092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.360314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.360359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.898 qpair failed and we were unable to recover it. 00:20:59.898 [2024-04-24 21:35:25.360657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.898 [2024-04-24 21:35:25.360878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.360907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.361148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.361370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.361396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.361744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.362052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.362097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.362336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.362550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.362574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.362742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.362953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.362994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.363251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.363465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.363508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 
00:20:59.899 [2024-04-24 21:35:25.363804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.364023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.364066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.364315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.364552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.364591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.364814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.365013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.365053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.365267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.365511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.365536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.365751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.365983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.366030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.366244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.366468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.366491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.366736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.366936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.366978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 
00:20:59.899 [2024-04-24 21:35:25.367192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.367448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.367472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.367697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.367937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.367962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.368174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.368394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.368435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.368643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.368845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.368871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.369132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.369387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.369429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.369644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.369854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.369879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.370106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.370306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.370334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 
00:20:59.899 [2024-04-24 21:35:25.370555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.370741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.370767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.370977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.371238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.371279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.371510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.371761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.371787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.372004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.372225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.372267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.372469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.372703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.372747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.372985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.373235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.373277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.373466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.373660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.373705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 
00:20:59.899 [2024-04-24 21:35:25.373922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.374176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.374218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.899 [2024-04-24 21:35:25.374437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.374640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.899 [2024-04-24 21:35:25.374666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.899 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.374877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.375109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.375137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.375361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.375589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.375613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.375843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.376087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.376114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.376347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.376549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.376573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.376768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.376981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.377023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 
00:20:59.900 [2024-04-24 21:35:25.377245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.377503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.377544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.377715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.377932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.377976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.378285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.378702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.378727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.378941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.379170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.379211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.379498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.379730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.379755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.379947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.380200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.380241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.380487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.380718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.380744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 
00:20:59.900 [2024-04-24 21:35:25.380961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.381258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.381299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.381463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.381683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.381708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.381917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.382156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.382198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.382410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.382613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.382646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.382868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.383083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.383124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.383338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.383540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.383565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.383751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.383956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.383997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 
00:20:59.900 [2024-04-24 21:35:25.384190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.384418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.384461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.384664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.384905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.384949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.385124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.385382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.385424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.385613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.385841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.900 [2024-04-24 21:35:25.385867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.900 qpair failed and we were unable to recover it. 00:20:59.900 [2024-04-24 21:35:25.386081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.386282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.386323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.901 qpair failed and we were unable to recover it. 00:20:59.901 [2024-04-24 21:35:25.386510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.386682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.386707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.901 qpair failed and we were unable to recover it. 00:20:59.901 [2024-04-24 21:35:25.386945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.387209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.387249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.901 qpair failed and we were unable to recover it. 
00:20:59.901 [2024-04-24 21:35:25.387471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.387686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.387711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.901 qpair failed and we were unable to recover it. 00:20:59.901 [2024-04-24 21:35:25.387924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.388153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.388181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.901 qpair failed and we were unable to recover it. 00:20:59.901 [2024-04-24 21:35:25.388425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.388622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.388655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.901 qpair failed and we were unable to recover it. 00:20:59.901 [2024-04-24 21:35:25.388865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.389120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.389162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.901 qpair failed and we were unable to recover it. 00:20:59.901 [2024-04-24 21:35:25.389369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.389571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.389595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.901 qpair failed and we were unable to recover it. 00:20:59.901 [2024-04-24 21:35:25.389817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.390052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.390080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.901 qpair failed and we were unable to recover it. 00:20:59.901 [2024-04-24 21:35:25.390344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.390545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.390570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.901 qpair failed and we were unable to recover it. 
00:20:59.901 [2024-04-24 21:35:25.390758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.390965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.901 [2024-04-24 21:35:25.391008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420 00:20:59.901 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure sequence above repeats roughly 150 more times between 21:35:25.391 and 21:35:25.457, always with errno = 111 against 10.0.0.2 port 4420; from 21:35:25.400 onward the failing tqpair is 0x1beef30 instead of 0x7f68dc000b90 ...]
00:20:59.906 [2024-04-24 21:35:25.457486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.457668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.457693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.906 qpair failed and we were unable to recover it.
00:20:59.906 [2024-04-24 21:35:25.457881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.458043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.458067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.906 qpair failed and we were unable to recover it. 00:20:59.906 [2024-04-24 21:35:25.458231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.458390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.458414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.906 qpair failed and we were unable to recover it. 00:20:59.906 [2024-04-24 21:35:25.458620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.458805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.458830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.906 qpair failed and we were unable to recover it. 00:20:59.906 [2024-04-24 21:35:25.458985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.459170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.459194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.906 qpair failed and we were unable to recover it. 00:20:59.906 [2024-04-24 21:35:25.459382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.459564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.459589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.906 qpair failed and we were unable to recover it. 00:20:59.906 [2024-04-24 21:35:25.459750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.906 [2024-04-24 21:35:25.459932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.459957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.460159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.460369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.460393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 
00:20:59.907 [2024-04-24 21:35:25.460581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.460773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.460800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.460960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.461138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.461166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.461349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.461538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.461562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.461749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.461907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.461932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.462166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.462398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.462440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.462632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.462784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.462808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.462993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.463178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.463202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 
00:20:59.907 [2024-04-24 21:35:25.463386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.463558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.463599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.463821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.463975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.464000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.464209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.464391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.464415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.464578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.464783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.464812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.464996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.465205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.465230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.465415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.465596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.465621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.465830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.466096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.466121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 
00:20:59.907 [2024-04-24 21:35:25.466334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.466525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.466552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.466772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.467043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.467070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.467253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.467440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.467464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.467657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.467817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.467842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.467991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.468147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.468173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.468384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.468569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.468594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.468821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.469118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.469142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 
00:20:59.907 [2024-04-24 21:35:25.469309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.469519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.469545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.469794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.469979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.470004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.470211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.470386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.470415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.470596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.470812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.470838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.471022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.471260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.471285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.471469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.471678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.471706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.907 qpair failed and we were unable to recover it. 00:20:59.907 [2024-04-24 21:35:25.471909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.907 [2024-04-24 21:35:25.472109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.472136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 
00:20:59.908 [2024-04-24 21:35:25.472367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.472576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.472600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.472816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.473031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.473058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.473266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.473420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.473445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.473650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.473856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.473880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.474084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.474284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.474311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.474516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.474688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.474716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.474923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.475132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.475174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 
00:20:59.908 [2024-04-24 21:35:25.475407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.475639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.475676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.475858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.476066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.476090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.476300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.476511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.476538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.476762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.476991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.477019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.477224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.477429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.477457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.477660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.477844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.477874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.478075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.478246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.478279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 
00:20:59.908 [2024-04-24 21:35:25.478511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.478722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.478750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.478933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.479162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.479189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.479374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.479574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.479601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.479813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.479992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.480019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.480228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.480430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.480457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.480659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.480892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.480919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.481117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.481310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.481338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 
00:20:59.908 [2024-04-24 21:35:25.481520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.481734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.481762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.481971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.482149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.482178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.482383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.482608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.482646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.482857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.483052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.483079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.483250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.483448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.483477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.483659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.483840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.483865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 00:20:59.908 [2024-04-24 21:35:25.484052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.484264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.484288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.908 qpair failed and we were unable to recover it. 
00:20:59.908 [2024-04-24 21:35:25.484505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.908 [2024-04-24 21:35:25.484716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.484740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.484923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.485150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.485177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.485413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.485615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.485650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.485821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.486021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.486047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.486250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.486479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.486506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.486711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.486920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.486948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.487154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.487384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.487412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 
00:20:59.909 [2024-04-24 21:35:25.487615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.487852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.487879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.488080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.488305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.488331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.488559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.488756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.488783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.488988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.489167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.489194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.489358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.489555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.489581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.489824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.489996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.490023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.490228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.490429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.490457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 
00:20:59.909 [2024-04-24 21:35:25.490665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.490894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.490921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.491162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.491363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.491390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.491598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.491816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.491841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.492032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.492213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.492241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.492447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.492669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.492694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.492854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.493082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.493110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.493312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.493519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.493544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 
00:20:59.909 [2024-04-24 21:35:25.493762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.493948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.493976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.494163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.494322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.494346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.494521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.494722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.494750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.494916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.495117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.909 [2024-04-24 21:35:25.495146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.909 qpair failed and we were unable to recover it. 00:20:59.909 [2024-04-24 21:35:25.495350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.495554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.495580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.910 qpair failed and we were unable to recover it. 00:20:59.910 [2024-04-24 21:35:25.495790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.495990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.496017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.910 qpair failed and we were unable to recover it. 00:20:59.910 [2024-04-24 21:35:25.496189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.496362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.496390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.910 qpair failed and we were unable to recover it. 
00:20:59.910 [2024-04-24 21:35:25.496615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.496850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.496874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.910 qpair failed and we were unable to recover it. 00:20:59.910 [2024-04-24 21:35:25.497052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.497244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.497272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.910 qpair failed and we were unable to recover it. 00:20:59.910 [2024-04-24 21:35:25.497505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.497712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.497740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.910 qpair failed and we were unable to recover it. 00:20:59.910 [2024-04-24 21:35:25.497917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.498116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.498143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.910 qpair failed and we were unable to recover it. 00:20:59.910 [2024-04-24 21:35:25.498349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.498548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.498576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.910 qpair failed and we were unable to recover it. 00:20:59.910 [2024-04-24 21:35:25.498778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.498984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.499012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.910 qpair failed and we were unable to recover it. 00:20:59.910 [2024-04-24 21:35:25.499220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.499426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.499453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.910 qpair failed and we were unable to recover it. 
00:20:59.910 [2024-04-24 21:35:25.499638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.499846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.910 [2024-04-24 21:35:25.499873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:20:59.910 qpair failed and we were unable to recover it.
00:21:00.209 [... the same three-message pattern (posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 21:35:25.499638 through 21:35:25.567279, differing only in timestamps ...]
00:21:00.209 [2024-04-24 21:35:25.567479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.567683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.567711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.567911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.568115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.568141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.568372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.568667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.568695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.568900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.569096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.569123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.569357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.569533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.569561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.569745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.569946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.569973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.570204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.570411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.570435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 
00:21:00.209 [2024-04-24 21:35:25.570626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.570847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.570871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.571088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.571322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.571349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.571581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.571821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.571849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.572046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.572268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.572299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.572476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.572681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.572709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.572941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.573121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.573149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.573355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.573561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.573588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 
00:21:00.209 [2024-04-24 21:35:25.573784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.573971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.573998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.574186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.574404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.574442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.574624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.574875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.574899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.575084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.575254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.575281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.575512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.575746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.575771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.209 [2024-04-24 21:35:25.575968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.576176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.209 [2024-04-24 21:35:25.576203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.209 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.576406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.576578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.576610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 
00:21:00.210 [2024-04-24 21:35:25.576826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.577015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.577039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.577277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.577481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.577508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.577727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.577890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.577932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.578170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.578397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.578424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.578670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.578878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.578907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.579112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.579342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.579369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.579580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.579815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.579841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 
00:21:00.210 [2024-04-24 21:35:25.580064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.580261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.580287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.580474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.580713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.580740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.580939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.581166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.581193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.581406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.581634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.581662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.581887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.582127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.582154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.582363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.582523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.582547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.582729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.583024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.583051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 
00:21:00.210 [2024-04-24 21:35:25.583417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.583645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.583673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.583893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.584073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.584098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.584311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.584517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.584544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.584773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.584936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.584962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.585183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.585351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.585378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.585558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.585794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.585821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.586009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.586241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.586267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 
00:21:00.210 [2024-04-24 21:35:25.586472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.586675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.586703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.586904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.587128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.587156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.587361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.587561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.587589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.587825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.588008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.588037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.588275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.588470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.588498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.588702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.588910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.588946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.589161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.589344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.589368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 
00:21:00.210 [2024-04-24 21:35:25.589580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.589811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.589838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.590054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.590253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.590281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.590463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.590670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.590699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.590926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.591097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.591123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.210 qpair failed and we were unable to recover it. 00:21:00.210 [2024-04-24 21:35:25.591330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.210 [2024-04-24 21:35:25.591486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.591528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.591741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.591924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.591948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.592151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.592355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.592382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 
00:21:00.211 [2024-04-24 21:35:25.592587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.592808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.592836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.593040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.593214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.593241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.593453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.593658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.593682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.593898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.594072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.594099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.594308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.594490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.594519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.594764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.594976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.595004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.595213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.595428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.595456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 
00:21:00.211 [2024-04-24 21:35:25.595650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.595853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.595879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.596112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.596286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.596313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.596549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.596741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.596767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.596983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.597185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.597211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.597398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.597580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.597604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.597824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.598055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.598082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.598295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.598450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.598474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 
00:21:00.211 [2024-04-24 21:35:25.598665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.598897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.598925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.599155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.599361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.599395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.599603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.599807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.599835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.600050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.600258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.600281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.600463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.600669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.600697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.600929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.601155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.601182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.601382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.601587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.601611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 
00:21:00.211 [2024-04-24 21:35:25.601781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.601990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.602017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.602252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.602450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.602477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.602691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.603014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.603042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.603245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.603486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.603511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.603700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.603879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.603907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.604157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.604334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.604360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.604598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.604839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.604866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 
00:21:00.211 [2024-04-24 21:35:25.605104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.605305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.605332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.605568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.605754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.605781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.605948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.606153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.211 [2024-04-24 21:35:25.606179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.211 qpair failed and we were unable to recover it. 00:21:00.211 [2024-04-24 21:35:25.606409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.606586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.606619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.212 qpair failed and we were unable to recover it. 00:21:00.212 [2024-04-24 21:35:25.606838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.607045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.607072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.212 qpair failed and we were unable to recover it. 00:21:00.212 [2024-04-24 21:35:25.607277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.607480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.607508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.212 qpair failed and we were unable to recover it. 00:21:00.212 [2024-04-24 21:35:25.607738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.607917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.607943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.212 qpair failed and we were unable to recover it. 
00:21:00.212 [2024-04-24 21:35:25.608123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.608349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.608376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.212 qpair failed and we were unable to recover it. 00:21:00.212 [2024-04-24 21:35:25.608580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.608821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.608846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.212 qpair failed and we were unable to recover it. 00:21:00.212 [2024-04-24 21:35:25.609017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.609221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.609249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.212 qpair failed and we were unable to recover it. 00:21:00.212 [2024-04-24 21:35:25.609478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.609711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.609738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.212 qpair failed and we were unable to recover it. 00:21:00.212 [2024-04-24 21:35:25.609965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.610163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.610190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.212 qpair failed and we were unable to recover it. 00:21:00.212 [2024-04-24 21:35:25.610420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.610649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.610677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.212 qpair failed and we were unable to recover it. 00:21:00.212 [2024-04-24 21:35:25.610859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.611066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.212 [2024-04-24 21:35:25.611094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.212 qpair failed and we were unable to recover it. 
00:21:00.212 [2024-04-24 21:35:25.611291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.212 [2024-04-24 21:35:25.611516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.212 [2024-04-24 21:35:25.611542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.212 qpair failed and we were unable to recover it.
[... the same three-message failure pattern (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1beef30 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 21:35:25.611 through 21:35:25.672 as the initiator keeps retrying the connection ...]
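The errno = 111 in the entries above is ECONNREFUSED on Linux: each TCP SYN to 10.0.0.2:4420 is answered with a reset because, at this point in the test, nothing is listening on the target side. A minimal sketch that reproduces the same failure mode (illustrative only, not SPDK's posix.c implementation):

/* Minimal sketch: a blocking TCP connect() that fails with ECONNREFUSED
 * (errno 111 on Linux) when no listener is bound to the address/port.
 * Hypothetical standalone example, not taken from SPDK. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no nvmf_tgt listening, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}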
00:21:00.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2681256 Killed "${NVMF_APP[@]}" "$@"
00:21:00.216 21:35:25 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:21:00.216 21:35:25 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:21:00.216 21:35:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:00.216 21:35:25 -- common/autotest_common.sh@710 -- # xtrace_disable
00:21:00.216 21:35:25 -- common/autotest_common.sh@10 -- # set +x
00:21:00.216 21:35:25 -- nvmf/common.sh@470 -- # nvmfpid=2681809
00:21:00.217 21:35:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:21:00.217 21:35:25 -- nvmf/common.sh@471 -- # waitforlisten 2681809
00:21:00.217 21:35:25 -- common/autotest_common.sh@817 -- # '[' -z 2681809 ']'
00:21:00.217 21:35:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:00.217 21:35:25 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:00.217 21:35:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:00.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:00.217 21:35:25 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:00.217 21:35:25 -- common/autotest_common.sh@10 -- # set +x
[... interleaved with this trace, the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence continues from 2024-04-24 21:35:25.672918 through 21:35:25.680918 ...]
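Note: the trace above shows the recovery path. The previous target (PID 2681256) was killed mid-test; disconnect_init relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xF0, and waitforlisten then polls (max_retries=100) until the new process listens on /var/tmp/spdk.sock. A simplified, hypothetical sketch of such a poll loop, assuming only the behavior visible in the trace (the real waitforlisten in autotest_common.sh does more, including probing the RPC server itself):

# Sketch only; not the actual autotest_common.sh implementation.
wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        # give up early if the freshly started target already died
        kill -0 "$pid" 2>/dev/null || return 1
        # done once the UNIX-domain RPC socket shows up
        [[ -S $rpc_addr ]] && return 0
        sleep 0.1
    done
    return 1   # listener never appeared within max_retries polls
}

# usage, mirroring the trace above:
#   wait_for_rpc_socket 2681809 /var/tmp/spdk.sock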
00:21:00.217 [2024-04-24 21:35:25.681080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.217 [2024-04-24 21:35:25.681289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.217 [2024-04-24 21:35:25.681313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.217 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously through 2024-04-24 21:35:25.719757 ...]
00:21:00.220 [2024-04-24 21:35:25.719971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.220 [2024-04-24 21:35:25.720124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.220 [2024-04-24 21:35:25.720149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.220 qpair failed and we were unable to recover it.
00:21:00.220 [2024-04-24 21:35:25.720125] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:21:00.220 [2024-04-24 21:35:25.720198] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence continues from 2024-04-24 21:35:25.720332 through 21:35:25.722471 ...]
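Note: the "Starting SPDK v24.05-pre ..." line confirms the relaunched target is initializing; its EAL core mask -c 0xF0 is the same mask passed as nvmfappstart -m 0xF0 and selects CPU cores 4 through 7. An illustrative snippet (not from the test suite) that decodes such a hex core mask:

# Decode a DPDK/SPDK hex core mask into a CPU list.
mask=0xF0
cores=()
for ((cpu = 0; cpu < 64; cpu++)); do
    (( (mask >> cpu) & 1 )) && cores+=("$cpu")
done
echo "core mask ${mask} -> cores: ${cores[*]}"   # -> cores: 4 5 6 7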
00:21:00.220 [2024-04-24 21:35:25.722649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.722861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.722885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.723070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.723254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.723279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.723453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.723706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.723731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.723930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.724089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.724113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.724305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.724492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.724517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.724711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.724899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.724923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.725083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.725295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.725321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 
00:21:00.220 [2024-04-24 21:35:25.725535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.725693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.725719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.725904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.726056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.726081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.726243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.726397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.726421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.726610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.726776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.726802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.727031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.727216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.727240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.727408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.727592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.727616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.727821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.727978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.728002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 
00:21:00.220 [2024-04-24 21:35:25.728169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.728329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.728353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.728513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.728740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.728764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.728950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.729131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.729157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.729339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.729526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.729550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.729713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.729926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.729951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.730139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.730420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.730447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.730736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.730897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.730922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 
00:21:00.220 [2024-04-24 21:35:25.731116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.731280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.731304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.731493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.731709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.731735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.731895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.732055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.732079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.732271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.732458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.732485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.732681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.732901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.732926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.733083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.733270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.733295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.733481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.733655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.733680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 
00:21:00.220 [2024-04-24 21:35:25.733841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.734055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.734080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.220 qpair failed and we were unable to recover it. 00:21:00.220 [2024-04-24 21:35:25.734280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.220 [2024-04-24 21:35:25.734463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.734487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.734653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.734816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.734842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.735033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.735216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.735244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.735400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.735583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.735607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.735741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfc860 is same with the state(5) to be set 00:21:00.221 [2024-04-24 21:35:25.735979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.736186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.736215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.736403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.736561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.736587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 
00:21:00.221 [2024-04-24 21:35:25.736812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.736993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.737019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.737200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.737467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.737494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.737684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.737851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.737877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.738089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.738317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.738342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.738525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.738716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.738742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.738937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.739092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.739118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.739277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.739475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.739502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 
00:21:00.221 [2024-04-24 21:35:25.739692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.739879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.739905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.740090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.740312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.740337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.740535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.740730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.740756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.740926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.741110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.741136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.741349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.741539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.741564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.741768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.741951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.741978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.742193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.742388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.742413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 
00:21:00.221 [2024-04-24 21:35:25.742583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.742780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.742806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.742966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.743121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.743147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.743343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.743540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.743566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.743744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.743960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.743985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.744175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.744385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.744410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.744597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.744776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.744802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.745017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.745192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.745217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 
00:21:00.221 [2024-04-24 21:35:25.745375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.745538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.745565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.745746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.745932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.745958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.746123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.746341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.746366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.746580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.746776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.746802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.746997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.747164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.747191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.747385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.747603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.747633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.747819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.748014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.748040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 
00:21:00.221 [2024-04-24 21:35:25.748195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.748409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.748434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.748617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.748784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.748810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.749008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.749193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.749218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.221 qpair failed and we were unable to recover it. 00:21:00.221 [2024-04-24 21:35:25.749426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.221 [2024-04-24 21:35:25.749612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.749651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.749844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.750038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.750063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.750244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.750443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.750469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.750662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.750823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.750850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 
00:21:00.222 [2024-04-24 21:35:25.751023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.751186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.751211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.751394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.751585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.751618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.751813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.751998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.752024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.752184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.752368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.752394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.752580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.752767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.752792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.753010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.753193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.753219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.753435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.753622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.753657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 
00:21:00.222 [2024-04-24 21:35:25.753871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.754090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.754116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.754334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.754514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.754539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.754726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.754913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.754940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.755127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.755340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.755364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.755583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.755816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.755846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.756016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.756214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.756240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.756429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.756616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.756647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 
00:21:00.222 [2024-04-24 21:35:25.756860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.222 [2024-04-24 21:35:25.757075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.757102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.757287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.757500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.757525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.757741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.757914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.757940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.758155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.758356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.758381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.758600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.758807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.758832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.759045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.759227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.759252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.759413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.759622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.759665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 
00:21:00.222 [2024-04-24 21:35:25.759856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.760047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.760072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.760271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.760450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.760476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.760666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.760854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.760879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.761073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.761243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.761268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.761432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.761644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.761670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.761833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.762017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.762042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.762207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.762364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.762389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 
00:21:00.222 [2024-04-24 21:35:25.762576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.762768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.762793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.762959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.763114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.763156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.763382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.763564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.763591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.763804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.764070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.764099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.764290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.764478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.764503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.764684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.764862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.764887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.765078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.765238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.765265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 
00:21:00.222 [2024-04-24 21:35:25.765467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.765633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.222 [2024-04-24 21:35:25.765658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.222 qpair failed and we were unable to recover it. 00:21:00.222 [2024-04-24 21:35:25.765821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.766037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.766061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.223 qpair failed and we were unable to recover it. 00:21:00.223 [2024-04-24 21:35:25.766244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.766455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.766480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.223 qpair failed and we were unable to recover it. 00:21:00.223 [2024-04-24 21:35:25.766638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.766830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.766855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.223 qpair failed and we were unable to recover it. 00:21:00.223 [2024-04-24 21:35:25.767039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.767220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.767245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.223 qpair failed and we were unable to recover it. 00:21:00.223 [2024-04-24 21:35:25.767428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.767616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.767646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.223 qpair failed and we were unable to recover it. 00:21:00.223 [2024-04-24 21:35:25.767835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.768039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.223 [2024-04-24 21:35:25.768068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.223 qpair failed and we were unable to recover it. 
00:21:00.223 [2024-04-24 21:35:25.768283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.223 [2024-04-24 21:35:25.768469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.223 [2024-04-24 21:35:25.768494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.223 qpair failed and we were unable to recover it.
00:21:00.223 [2024-04-24 21:35:25.768709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.223 [2024-04-24 21:35:25.768899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.223 [2024-04-24 21:35:25.768924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.223 qpair failed and we were unable to recover it.
[... the same three-line failure cycle repeats for every retry from 21:35:25.769137 through 21:35:25.790971, identical except for timestamps, always tqpair=0x7f68d4000b90, addr=10.0.0.2, port=4420 ...]
00:21:00.224 [2024-04-24 21:35:25.791149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.224 [2024-04-24 21:35:25.791338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.224 [2024-04-24 21:35:25.791363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.224 [2024-04-24 21:35:25.791360] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:00.224 qpair failed and we were unable to recover it.
[... the retry cycle continues unchanged from 21:35:25.791549 through 21:35:25.831726, still connect() failed with errno = 111 against 10.0.0.2:4420 ...]
00:21:00.227 [2024-04-24 21:35:25.831935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.227 [2024-04-24 21:35:25.832124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.227 [2024-04-24 21:35:25.832149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.227 qpair failed and we were unable to recover it.
00:21:00.227 [2024-04-24 21:35:25.832336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.832519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.832544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.832731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.832924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.832950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.833152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.833366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.833391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.833576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.833733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.833759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.833952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.834137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.834162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.834354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.834540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.834566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.834754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.834949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.834975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 
00:21:00.227 [2024-04-24 21:35:25.835161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.835347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.835372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.835561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.835751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.835777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.835966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.836149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.836176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.836359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.836575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.836601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.836799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.837010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.837035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.837223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.837433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.837458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.837614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.837806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.837833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 
00:21:00.227 [2024-04-24 21:35:25.837998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.838192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.838217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.838406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.838589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.838614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.838805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.838975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.839000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.839180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.839335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.839360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.839543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.839734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.839762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.839950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.840109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.840135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.840291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.840501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.840526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 
00:21:00.227 [2024-04-24 21:35:25.840723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.840909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.840936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.841095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.841259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.841284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.841494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.841683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.841711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.841933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.842134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.842159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.842353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.842511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.842536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.842701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.842884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.842913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.843132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.843343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.843368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 
00:21:00.227 [2024-04-24 21:35:25.843559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.843723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.843750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.843939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.844099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.844124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.844314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.844526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.844551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.844816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.845004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.845029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.845214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.845377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.845402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.845611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.845808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.227 [2024-04-24 21:35:25.845834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.227 qpair failed and we were unable to recover it. 00:21:00.227 [2024-04-24 21:35:25.846016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.846200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.846225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 
00:21:00.228 [2024-04-24 21:35:25.846383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.846579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.846605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.846797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.846953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.846982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.847145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.847332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.847357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.847569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.847759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.847786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.847948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.848146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.848172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.848359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.848544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.848570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.848757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.848952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.848977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 
00:21:00.228 [2024-04-24 21:35:25.849144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.849333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.849359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.849566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.849754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.849780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.849986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.850148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.850172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.850383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.850567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.850593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.850810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.851074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.851103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.851288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.851476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.851502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.851722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.851890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.851915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 
00:21:00.228 [2024-04-24 21:35:25.852128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.852284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.852309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.852499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.852687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.852713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.852872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.853035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.853060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.853224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.853435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.853460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.853651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.853852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.853877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.854160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.854344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.854370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.854637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.854851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.854876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 
00:21:00.228 [2024-04-24 21:35:25.855085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.855274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.855304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.855498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.855687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.855713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.855917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.856081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.856106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.856318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.856506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.856532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.856743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.856923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.856948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.857109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.857276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.857301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.857487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.857704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.857730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 
00:21:00.228 [2024-04-24 21:35:25.857916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.858103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.858129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.858346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.858544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.858569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.858735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.858947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.858972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.859161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.859347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.859373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.859544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.859760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.859787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.859976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.860147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.860172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.860355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.860542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.860569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 
00:21:00.228 [2024-04-24 21:35:25.860758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.860944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.860971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.861186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.861374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.861400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.861600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.861871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.861896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.862080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.862265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.862290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.862506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.862695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.862722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.228 qpair failed and we were unable to recover it. 00:21:00.228 [2024-04-24 21:35:25.862909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.228 [2024-04-24 21:35:25.863096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.863122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.863309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.863499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.863524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 
00:21:00.229 [2024-04-24 21:35:25.863678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.863868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.863894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.864079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.864243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.864270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.864429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.864620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.864652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.864809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.865001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.865026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.865215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.865451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.865477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.865645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.865870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.865896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.866088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.866269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.866294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 
00:21:00.229 [2024-04-24 21:35:25.866509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.866689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.866715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.866983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.867148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.867173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.867370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.867564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.867590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.867770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.867980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.868005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.868195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.868379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.868406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.229 [2024-04-24 21:35:25.868593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.868792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.229 [2024-04-24 21:35:25.868819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.229 qpair failed and we were unable to recover it. 00:21:00.504 [2024-04-24 21:35:25.868990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.869185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.869212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.504 qpair failed and we were unable to recover it. 
00:21:00.504 [2024-04-24 21:35:25.869402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.869596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.869642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.504 qpair failed and we were unable to recover it. 00:21:00.504 [2024-04-24 21:35:25.869872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.870045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.870072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.504 qpair failed and we were unable to recover it. 00:21:00.504 [2024-04-24 21:35:25.870284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.870471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.870496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.504 qpair failed and we were unable to recover it. 00:21:00.504 [2024-04-24 21:35:25.870660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.870852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.870878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.504 qpair failed and we were unable to recover it. 00:21:00.504 [2024-04-24 21:35:25.871052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.871208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.871234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.504 qpair failed and we were unable to recover it. 00:21:00.504 [2024-04-24 21:35:25.871424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.871634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.871660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.504 qpair failed and we were unable to recover it. 00:21:00.504 [2024-04-24 21:35:25.871834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.872045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.872072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.504 qpair failed and we were unable to recover it. 
00:21:00.504 [2024-04-24 21:35:25.872231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.872395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.872421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.504 qpair failed and we were unable to recover it. 00:21:00.504 [2024-04-24 21:35:25.872638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.872799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.872824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.504 qpair failed and we were unable to recover it. 00:21:00.504 [2024-04-24 21:35:25.872984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.504 [2024-04-24 21:35:25.873145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.873170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 00:21:00.505 [2024-04-24 21:35:25.873382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.873544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.873569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 00:21:00.505 [2024-04-24 21:35:25.873734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.873933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.873958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 00:21:00.505 [2024-04-24 21:35:25.874149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.874334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.874359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 00:21:00.505 [2024-04-24 21:35:25.874543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.874722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.874749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 
00:21:00.505 [2024-04-24 21:35:25.874936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.875118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.875144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 00:21:00.505 [2024-04-24 21:35:25.875366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.875549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.875574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 00:21:00.505 [2024-04-24 21:35:25.875763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.875970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.875995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 00:21:00.505 [2024-04-24 21:35:25.876203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.876390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.876415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 00:21:00.505 [2024-04-24 21:35:25.876600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.876814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.876840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 00:21:00.505 [2024-04-24 21:35:25.877004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.877193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.877219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 00:21:00.505 [2024-04-24 21:35:25.877375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.877538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.505 [2024-04-24 21:35:25.877565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420 00:21:00.505 qpair failed and we were unable to recover it. 
00:21:00.505 [2024-04-24 21:35:25.877759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.877959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.877984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.878171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.878328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.878353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.878541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.878754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.878780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.878963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.879123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.879148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.879340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.879499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.879525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.879742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.879933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.879958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.880120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.880275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.880300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.880459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.880623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.880653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.880865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.881082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.881107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.881266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.881457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.881482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.881674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.881837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.881863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.882024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.882180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.882207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.882393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.882601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.882626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.882816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.883032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.883057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.883247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.883430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.883455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.883621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.883797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.883823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.884020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.884203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.505 [2024-04-24 21:35:25.884229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.505 qpair failed and we were unable to recover it.
00:21:00.505 [2024-04-24 21:35:25.884417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.884609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.884655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.884846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.885031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.885056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.885246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.885431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.885456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.885640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.885825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.885851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.886038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.886199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.886224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.886416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.886577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.886602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.886792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.886976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.887001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.887165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.887430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.887455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.887649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.887838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.887863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.888073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.888264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.888289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.888501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.888684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.888710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.888891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.889078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.889103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.889313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.889532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.889558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.889745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.890007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.890032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.890249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.890433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.890459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.890673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.890858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.890883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.891047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.891237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.891261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.891475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.891644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.891687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.891873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.892058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.892083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.892264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.892425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.892450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.892611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.892805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.892831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.893023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.893215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.893240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.893425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.893614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.893647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.893848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.894010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.894034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.894195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.894402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.894426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.894581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.894792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.894819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.894972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.895132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.895157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.895340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.895496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.895520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.895684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.895872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.506 [2024-04-24 21:35:25.895897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.506 qpair failed and we were unable to recover it.
00:21:00.506 [2024-04-24 21:35:25.896098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.896287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.896311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.896477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.896665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.896691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.896901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.897058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.897085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.897273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.897470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.897495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.897674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.897859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.897884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.898061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.898224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.898250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.898404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.898590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.898615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.898813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.899002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.899028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.899180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.899334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.899359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.899610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.899807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.899833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.899992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.900189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.900214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.900397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.900575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.900601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.900803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.900990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.901015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.901200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.901387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.901412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.901612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.901836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.901862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.902038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.902234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.902259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.902455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.902639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.902665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.902864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.903021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.903047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.903262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.903449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.903475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.903637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.903805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.903835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.904024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.904225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.904250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.904431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.904657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.904684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.904871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.905057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.905082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.905270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.905456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.905481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.905668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.905832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.905857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.906018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.906173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.906199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.906361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.906538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.906562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.507 qpair failed and we were unable to recover it.
00:21:00.507 [2024-04-24 21:35:25.906729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.507 [2024-04-24 21:35:25.906726] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:00.507 [2024-04-24 21:35:25.906760] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:00.507 [2024-04-24 21:35:25.906774] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:00.507 [2024-04-24 21:35:25.906786] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:00.508 [2024-04-24 21:35:25.906797] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:00.508 [2024-04-24 21:35:25.906908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.906863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:21:00.508 [2024-04-24 21:35:25.906932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.906891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:21:00.508 [2024-04-24 21:35:25.906916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:21:00.508 [2024-04-24 21:35:25.906919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:21:00.508 [2024-04-24 21:35:25.907123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.907307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.907332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
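The app_setup_trace NOTICE lines above spell out the two ways to pull trace data from this run. A minimal sketch of both, assuming spdk_trace was built under build/bin of the checked-out SPDK tree (that binary path is an assumption; the command arguments and the /dev/shm/nvmf_trace.0 file name come verbatim from the NOTICE lines):

  # snapshot events from the running nvmf app, shm instance id 0
  ./build/bin/spdk_trace -s nvmf -i 0
  # or copy the raw trace shm file out for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0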
00:21:00.508 [2024-04-24 21:35:25.907522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.907690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.907717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.907908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.908070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.908095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.908252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.908416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.908443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.908640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.908824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.908849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.909040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.909220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.909245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.909437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.909599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.909625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.909836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.910121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.910146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.910336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.910498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.910525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.910720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.910883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.910908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.911095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.911259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.911285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.911584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.911809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.911835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.911995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.912184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.912209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.912395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.912581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.912608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68d4000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.912810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.913115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.913148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.913369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.913571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.913602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.913828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.914056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.914086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.914298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.914505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.914535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.914761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.914962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.914992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.915202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.915418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.915447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.915664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.915835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.915865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.916131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.916368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.916398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.916600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.916814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.916844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.917065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.917289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.917318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.917502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.917672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.917702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.917913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.918105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.918134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.508 qpair failed and we were unable to recover it.
00:21:00.508 [2024-04-24 21:35:25.918348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.508 [2024-04-24 21:35:25.918553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.918582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.918795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.918986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.919016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68dc000b90 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.919213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.919390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.919418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.919611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.919792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.919819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.920014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.920219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.920244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.920512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.920717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.920743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.920936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.921126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.921152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.921433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.921614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.921644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.921805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.921967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.921991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.922296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.922457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.922481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.922666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.922824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.922849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.923036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.923199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.923224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.923416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.923580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.923605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.923783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.923959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.923989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.924144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.924343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.924367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.924534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.924722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.924748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.924908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.925090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.925115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.925329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.925518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.925542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.925722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.925876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.925900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.926057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.926218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.926242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.926434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.926615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.926645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.926837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.927000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.927025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.509 qpair failed and we were unable to recover it.
00:21:00.509 [2024-04-24 21:35:25.927217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.927383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.509 [2024-04-24 21:35:25.927409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.510 qpair failed and we were unable to recover it.
00:21:00.510 [2024-04-24 21:35:25.927680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.510 [2024-04-24 21:35:25.927855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.510 [2024-04-24 21:35:25.927885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420
00:21:00.510 qpair failed and we were unable to recover it.
00:21:00.510 [2024-04-24 21:35:25.928066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.928225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.928249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.928416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.928572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.928596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.928772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.928934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.928958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.929144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.929354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.929378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.929563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.929777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.929801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.929985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.930142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.930167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.930331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.930523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.930550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 
00:21:00.510 [2024-04-24 21:35:25.930718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.930869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.930894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.931101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.931406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.931431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.931626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.931796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.931822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.932019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.932216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.932241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.932402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.932596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.932620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.932795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.932957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.932981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.933144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.933306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.933330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 
00:21:00.510 [2024-04-24 21:35:25.933493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.933690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.933716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.933874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.934043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.934067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.934235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.934392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.934417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.934597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.934753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.934778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.934935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.935100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.935124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.935293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.935444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.935473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.935632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.935796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.935820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 
00:21:00.510 [2024-04-24 21:35:25.935979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.936161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.936186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.936369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.936539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.936562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.936724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.936907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.936931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.937095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.937258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.937282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.937464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.937658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.937684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.937895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.938045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.510 [2024-04-24 21:35:25.938070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.510 qpair failed and we were unable to recover it. 00:21:00.510 [2024-04-24 21:35:25.938248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.938403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.938429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 
00:21:00.511 [2024-04-24 21:35:25.938618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.938808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.938833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.939025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.939215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.939239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.939396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.939564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.939589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.939768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.939923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.939947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.940126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.940279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.940303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.940486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.940672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.940697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.940880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.941041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.941065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 
00:21:00.511 [2024-04-24 21:35:25.941250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.941459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.941483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.941677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.941865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.941890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.942068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.942244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.942268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.942466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.942651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.942676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.942838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.943005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.943028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.943194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.943383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.943408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.943610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.943801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.943831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 
00:21:00.511 [2024-04-24 21:35:25.944120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.944307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.944331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.944494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.944668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.944693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.944853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.945025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.945049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.945236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.945400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.945424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.945598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.945796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.945821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.945979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.946143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.946168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.946350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.946534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.946558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 
00:21:00.511 [2024-04-24 21:35:25.946742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.946944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.946968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.947145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.947330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.947359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.947519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.947680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.947705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.947870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.948017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.948041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.948195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.948342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.948366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.948527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.948689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.948715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 00:21:00.511 [2024-04-24 21:35:25.948897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.949053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.949077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.511 qpair failed and we were unable to recover it. 
00:21:00.511 [2024-04-24 21:35:25.949289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.511 [2024-04-24 21:35:25.949449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.949473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.949641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.949797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.949822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.949985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.950134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.950159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.950349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.950552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.950577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.950776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.950976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.951001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.951166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.951343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.951367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.951551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.951837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.951862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 
00:21:00.512 [2024-04-24 21:35:25.952031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.952248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.952273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.952547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.952733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.952758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.952955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.953138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.953162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.953325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.953487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.953512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.953688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.953851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.953876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.954033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.954226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.954251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.954413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.954568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.954594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 
00:21:00.512 [2024-04-24 21:35:25.954751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.954914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.954938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.955131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.955293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.955317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.955511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.955691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.955717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.955883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.956069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.956094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.956285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.956448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.956473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.956623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.956787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.956812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.957004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.957182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.957207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 
00:21:00.512 [2024-04-24 21:35:25.957390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.957588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.957612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.957785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.957945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.957969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.958157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.958304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.958329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.958501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.958673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.958698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.958873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.959059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.959083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.959292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.959463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.959487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.959652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.959841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.959866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 
00:21:00.512 [2024-04-24 21:35:25.960040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.960209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.960234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.512 qpair failed and we were unable to recover it. 00:21:00.512 [2024-04-24 21:35:25.960397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.512 [2024-04-24 21:35:25.960564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.960589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.960792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.960949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.960974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.961156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.961338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.961363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.961543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.961699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.961724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.961875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.962051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.962076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.962235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.962443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.962468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 
00:21:00.513 [2024-04-24 21:35:25.962622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.962793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.962818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.963006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.963173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.963197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.963371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.963527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.963551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.963726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.963915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.963940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.964126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.964339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.964364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.964541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.964716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.964742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.964906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.965092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.965116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 
00:21:00.513 [2024-04-24 21:35:25.965273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.965453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.965477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.965654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.965817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.965842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.966041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.966215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.966240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.966396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.966546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.966575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.966877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.967093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.967118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.967281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.967467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.967491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.967655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.967822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.967847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 
00:21:00.513 [2024-04-24 21:35:25.968002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.968214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.968238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.968401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.968549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.968574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.968764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.968953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.968977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.969182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.969332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.969360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.513 [2024-04-24 21:35:25.969520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.969678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.513 [2024-04-24 21:35:25.969703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.513 qpair failed and we were unable to recover it. 00:21:00.514 [2024-04-24 21:35:25.969861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.970082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.970107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.514 qpair failed and we were unable to recover it. 00:21:00.514 [2024-04-24 21:35:25.970273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.970429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.970453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.514 qpair failed and we were unable to recover it. 
00:21:00.514 [2024-04-24 21:35:25.970615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.970790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.970817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.514 qpair failed and we were unable to recover it. 00:21:00.514 [2024-04-24 21:35:25.970999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.971165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.971188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.514 qpair failed and we were unable to recover it. 00:21:00.514 [2024-04-24 21:35:25.971366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.971521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.971546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.514 qpair failed and we were unable to recover it. 00:21:00.514 [2024-04-24 21:35:25.971731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.971921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.971945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.514 qpair failed and we were unable to recover it. 00:21:00.514 [2024-04-24 21:35:25.972119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.972303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.972328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.514 qpair failed and we were unable to recover it. 00:21:00.514 [2024-04-24 21:35:25.972493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.972648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.972674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.514 qpair failed and we were unable to recover it. 00:21:00.514 [2024-04-24 21:35:25.972855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.973045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.514 [2024-04-24 21:35:25.973069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.514 qpair failed and we were unable to recover it. 
00:21:00.519 [2024-04-24 21:35:26.025817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.026002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.026026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.026209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.026388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.026413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.026591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.026789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.026814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.026973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.027124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.027148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.027332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.027494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.027519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.027672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.027820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.027849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.028014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.028167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.028191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 
00:21:00.519 [2024-04-24 21:35:26.028357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.028537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.028562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.028745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.028903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.028927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.029112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.029264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.029288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.029471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.029666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.029691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.029908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.030088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.030113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.030289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.030449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.030473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.030686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.030859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.030883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 
00:21:00.519 [2024-04-24 21:35:26.031064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.031223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.031248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.031454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.031640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.031665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.031854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.032006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.032030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.032219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.032378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.032403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.032564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.032784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.032810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.519 qpair failed and we were unable to recover it. 00:21:00.519 [2024-04-24 21:35:26.032967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.519 [2024-04-24 21:35:26.033155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.033180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.033385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.033557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.033581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 
00:21:00.520 [2024-04-24 21:35:26.033751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.033957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.033982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.034139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.034308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.034331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.034491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.034653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.034678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.034858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.035026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.035050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.035238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.035387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.035411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.035600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.035776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.035801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.035953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.036105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.036129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 
00:21:00.520 [2024-04-24 21:35:26.036315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.036504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.036530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.036684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.036866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.036891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.037063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.037223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.037247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.037402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.037560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.037585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.037745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.037929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.037953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.038109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.038256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.038281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 00:21:00.520 [2024-04-24 21:35:26.038466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.038618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.520 [2024-04-24 21:35:26.038648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.520 qpair failed and we were unable to recover it. 
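For triage: errno = 111 on Linux is ECONNREFUSED, which for TCP usually means the SYN to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) was rejected because nothing was listening there. A hedged pair of checks one could run on the target host (not part of this log):

    # Is anything listening on the NVMe/TCP port the initiator is dialing?
    ss -ltn 'sport = :4420'
    # Decode the errno value reported above:
    python3 -c 'import errno, os; print(errno.errorcode[111], "=", os.strerror(111))'
    # -> ECONNREFUSED = Connection refused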
[... failure records continue (21:35:26.038807-21:35:26.039541, same errno = 111 pattern) as the test script resumes tracing ...]
00:21:00.520 21:35:26 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:00.520 21:35:26 -- common/autotest_common.sh@850 -- # return 0
00:21:00.520 21:35:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:21:00.520 21:35:26 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:00.520 21:35:26 -- common/autotest_common.sh@10 -- # set +x
[... failure records continue, interleaved with the trace above (21:35:26.039723-21:35:26.041273) ...]
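The (( i == 0 )) / return 0 trace above looks like the tail of a bounded retry loop in autotest_common.sh. As an illustrative sketch only (the function name, the nc probe, and the 10-attempt bound are assumptions, not the actual SPDK source):

    # Hypothetical bounded wait-for-listener loop in bash
    wait_for_tcp_listener() {
        local ip=$1 port=$2 i
        for ((i = 10; i > 0; i--)); do
            nc -z "$ip" "$port" && return 0   # listener answered
            sleep 1
        done
        return 1    # loop fell through, so (( i == 0 )): retries exhausted
    }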
[... the connect()/qpair failure record repeats from 21:35:26.041427 through 21:35:26.057226, still errno = 111 against 10.0.0.2 port 4420 ...]
00:21:00.522 21:35:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:00.522 21:35:26 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:00.522 21:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:00.522 21:35:26 -- common/autotest_common.sh@10 -- # set +x
[... failure records continue, interleaved with the trace above (21:35:26.057376-21:35:26.059800) ...]
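The rpc_cmd trace above is target_disconnect.sh creating its backing device; in these test scripts rpc_cmd forwards to SPDK's scripts/rpc.py. bdev_malloc_create takes the total size in MB and the block size in bytes, so this allocates a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0. Run standalone against a live target (default RPC socket assumed) it would be:

    # Create a 64 MB malloc bdev with 512 B blocks, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

The trap registered just before it makes nvmftestfini (preceded by a best-effort process_shm dump) run on SIGINT, SIGTERM, or EXIT, so the target is torn down even if the test aborts here.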
[... the connect()/qpair failure record repeats from 21:35:26.059955 through 21:35:26.076071; every attempt to 10.0.0.2 port 4420 still fails with errno = 111 and the qpair is not recovered ...]
00:21:00.524 [2024-04-24 21:35:26.076238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.076421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.076446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.076638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.076924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.076948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.077173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.077334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.077358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.077570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.077766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.077796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.077965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.078157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.078181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.078384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.078537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.078562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.078738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.078897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.078922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 
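errno 111 is ECONNREFUSED: at this point in the run the initiator keeps calling connect() toward 10.0.0.2:4420 while the target is still being configured, so every attempt is refused and the same triplet (two posix_sock_create failures, one nvme_tcp_qpair_connect_sock failure, then the unrecoverable-qpair message) repeats with only the timestamps advancing. A minimal shell sketch of the same failure mode, assuming only that netcat is installed; the address and port are the ones in the traces above:

    # no listener on 10.0.0.2:4420 yet, so the TCP handshake itself is refused
    nc -v -w 1 10.0.0.2 4420 || echo 'connect() failed, errno = 111 (ECONNREFUSED)'

The refusals stop once the target adds its listener (the *** NVMe/TCP Target Listening *** notice further down).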
00:21:00.524 [2024-04-24 21:35:26.079095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.079245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.079270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.079427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.079580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.079604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.079784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.079943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.079967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.080171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.080356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.080381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.080539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.080759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.080785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 Malloc0 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.080967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.081135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.081160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.081320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 21:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.524 [2024-04-24 21:35:26.081482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 21:35:26 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:00.524 [2024-04-24 21:35:26.081511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 
00:21:00.524 21:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.524 [2024-04-24 21:35:26.081684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 21:35:26 -- common/autotest_common.sh@10 -- # set +x 00:21:00.524 [2024-04-24 21:35:26.081830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.081855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.082052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.082219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.082243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.082390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.082549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.082573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.082773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.082928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.082952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.083111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.083298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.083323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.083469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.083624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.083654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.083817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.083987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.084012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 
00:21:00.524 [2024-04-24 21:35:26.084201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.084359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.084382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.084464] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.524 [2024-04-24 21:35:26.084562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.084737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.084763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.524 qpair failed and we were unable to recover it. 00:21:00.524 [2024-04-24 21:35:26.084978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.085140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.524 [2024-04-24 21:35:26.085164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.085346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.085554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.085579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.085744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.085964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.085988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.086147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.086339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.086364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.086537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.086721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.086745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 
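The rpc_cmd nvmf_create_transport -t tcp -o trace above is the JSON-RPC call that produces the *** TCP Transport Init *** notice from tcp.c. Outside the test wrappers, a hedged equivalent against an already running nvmf_tgt would use the stock scripts/rpc.py from the SPDK tree; the harness-supplied -o transport option is omitted here rather than guessed at:

    # create the TCP transport inside the running nvmf_tgt (default RPC socket)
    scripts/rpc.py nvmf_create_transport -t tcp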
00:21:00.525 [2024-04-24 21:35:26.086911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.087100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.087125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.087311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.087473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.087498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.087694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.087895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.087920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.088083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.088269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.088293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.088477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.088679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.088704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.088917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.089072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.089096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.089255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.089414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.089439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 
00:21:00.525 [2024-04-24 21:35:26.089593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.089766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.089790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.090004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.090160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.090186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.090343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.090547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.090571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.090744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.090900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.090935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.091116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.091304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.091327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.091477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.091636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.091661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.091841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.091993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.092019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 
00:21:00.525 [2024-04-24 21:35:26.092205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.092368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.092392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.092541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.092726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 21:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.525 [2024-04-24 21:35:26.092752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 21:35:26 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:00.525 [2024-04-24 21:35:26.092934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 21:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.525 21:35:26 -- common/autotest_common.sh@10 -- # set +x 00:21:00.525 [2024-04-24 21:35:26.093143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.093167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.093327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.093508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.093532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.093682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.093857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.093881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.094074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.094232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.094256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.094440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.094640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.094665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 
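Interleaved with the connect noise, the harness now creates the subsystem: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, where -a allows any host NQN to connect and -s sets the serial number. The standalone sketch, under the same rpc.py assumption as above:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001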
00:21:00.525 [2024-04-24 21:35:26.094818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.094975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.094999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.095152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.095300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.095324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.095510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.095685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.525 [2024-04-24 21:35:26.095710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.525 qpair failed and we were unable to recover it. 00:21:00.525 [2024-04-24 21:35:26.095891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.096050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.096076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.096265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.096443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.096467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.096659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.096817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.096841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.097024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.097207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.097231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 
00:21:00.526 [2024-04-24 21:35:26.097413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.097596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.097625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.097817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.097999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.098023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.098231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.098401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.098426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.098604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.098796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.098822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.099008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.099158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.099183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.099341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.099496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.099522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.099714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.099914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.099939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 
00:21:00.526 [2024-04-24 21:35:26.100135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.100303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.100328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.100482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.100666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.100691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 21:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.526 [2024-04-24 21:35:26.100850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 21:35:26 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:00.526 21:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.526 [2024-04-24 21:35:26.101069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.101094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 21:35:26 -- common/autotest_common.sh@10 -- # set +x 00:21:00.526 [2024-04-24 21:35:26.101267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.101455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.101479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.101662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.101834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.101858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.102036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.102227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.102252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.102404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.102565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.102589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 
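The stray Malloc0 token spliced into the log a little earlier is the bdev name echoed back by the RPC that created it; here that bdev is attached to cnode1 as a namespace. A sketch of the pair, where the 64 MiB / 512-byte-block sizing is an illustrative assumption rather than something recorded in this log:

    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512    # prints the bdev name, "Malloc0"
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0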
00:21:00.526 [2024-04-24 21:35:26.102748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.102937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.102962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.103140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.103293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.103319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.103505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.103688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.103714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.103895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.104046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.104071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.104218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.104368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.104392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.104575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.104739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.104765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.104969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.105145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.105171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 
00:21:00.526 [2024-04-24 21:35:26.105375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.105528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.105552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.105738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.105893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.105918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.106100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.106281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.526 [2024-04-24 21:35:26.106305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.526 qpair failed and we were unable to recover it. 00:21:00.526 [2024-04-24 21:35:26.106481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.106694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.106719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.106895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.107093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.107116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.107299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.107458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.107487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.107654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.107831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.107856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 
00:21:00.527 [2024-04-24 21:35:26.108041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.108188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.108211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.108394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.108569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.108594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 21:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.527 [2024-04-24 21:35:26.108758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 21:35:26 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:00.527 [2024-04-24 21:35:26.108972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 21:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.527 [2024-04-24 21:35:26.108997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 21:35:26 -- common/autotest_common.sh@10 -- # set +x 00:21:00.527 [2024-04-24 21:35:26.109154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.109308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.109333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.109517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.109701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.109725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.109881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.110043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.110068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.110226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.110381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.110405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 
00:21:00.527 [2024-04-24 21:35:26.110560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.110732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.110756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.110940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.111142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.111166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.111313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.111489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.111514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.111672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.111861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.111885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.112066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.112241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.112265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 [2024-04-24 21:35:26.112451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.112599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.527 [2024-04-24 21:35:26.112623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1beef30 with addr=10.0.0.2, port=4420 00:21:00.527 qpair failed and we were unable to recover it. 
00:21:00.527 [2024-04-24 21:35:26.112790] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.527 [2024-04-24 21:35:26.115228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.527 [2024-04-24 21:35:26.115439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.527 [2024-04-24 21:35:26.115467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.527 [2024-04-24 21:35:26.115482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.527 [2024-04-24 21:35:26.115495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:00.527 [2024-04-24 21:35:26.115528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.527 qpair failed and we were unable to recover it. 00:21:00.527 21:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.527 21:35:26 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:00.527 21:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.527 21:35:26 -- common/autotest_common.sh@10 -- # set +x 00:21:00.527 21:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.527 21:35:26 -- host/target_disconnect.sh@58 -- # wait 2681280 00:21:00.527 [2024-04-24 21:35:26.125096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.527 [2024-04-24 21:35:26.125291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.527 [2024-04-24 21:35:26.125319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.527 [2024-04-24 21:35:26.125334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.527 [2024-04-24 21:35:26.125357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:00.527 [2024-04-24 21:35:26.125386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.527 qpair failed and we were unable to recover it. 
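With the I/O listener and the discovery listener in place, the notice flips to *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** and the failure mode changes: TCP now connects, but the Fabrics CONNECT command is rejected. sct 1, sc 130 decodes to the command-specific status Connect Invalid Parameters (0x82), the host-side mirror of the target's Unknown controller ID 0x1 complaint when an I/O qpair names a controller ID it cannot find, which is consistent with the disconnect scenarios host/target_disconnect.sh drives. For reference, the host-side counterparts of the two listeners, assuming nvme-cli on the initiator:

    nvme discover -t tcp -a 10.0.0.2 -s 4420                                # discovery listener
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # I/O listener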
00:21:00.527 [2024-04-24 21:35:26.135133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.527 [2024-04-24 21:35:26.135325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.528 [2024-04-24 21:35:26.135353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.528 [2024-04-24 21:35:26.135368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.528 [2024-04-24 21:35:26.135384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:00.528 [2024-04-24 21:35:26.135413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.528 qpair failed and we were unable to recover it. 00:21:00.528 [2024-04-24 21:35:26.145155] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.528 [2024-04-24 21:35:26.145325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.528 [2024-04-24 21:35:26.145351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.528 [2024-04-24 21:35:26.145366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.528 [2024-04-24 21:35:26.145378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:00.528 [2024-04-24 21:35:26.145406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.528 qpair failed and we were unable to recover it. 00:21:00.528 [2024-04-24 21:35:26.155112] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.528 [2024-04-24 21:35:26.155286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.528 [2024-04-24 21:35:26.155313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.528 [2024-04-24 21:35:26.155328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.528 [2024-04-24 21:35:26.155339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:00.528 [2024-04-24 21:35:26.155367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.528 qpair failed and we were unable to recover it. 
00:21:00.528 [2024-04-24 21:35:26.165124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.528 [2024-04-24 21:35:26.165289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.528 [2024-04-24 21:35:26.165315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.528 [2024-04-24 21:35:26.165330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.528 [2024-04-24 21:35:26.165342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.528 [2024-04-24 21:35:26.165370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.528 qpair failed and we were unable to recover it.
00:21:00.788 [2024-04-24 21:35:26.175110] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.788 [2024-04-24 21:35:26.175274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.788 [2024-04-24 21:35:26.175300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.788 [2024-04-24 21:35:26.175315] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.788 [2024-04-24 21:35:26.175327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.788 [2024-04-24 21:35:26.175354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.788 qpair failed and we were unable to recover it.
00:21:00.788 [2024-04-24 21:35:26.185127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.788 [2024-04-24 21:35:26.185312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.788 [2024-04-24 21:35:26.185339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.788 [2024-04-24 21:35:26.185354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.788 [2024-04-24 21:35:26.185367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.788 [2024-04-24 21:35:26.185395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.788 qpair failed and we were unable to recover it.
00:21:00.788 [2024-04-24 21:35:26.195242] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.788 [2024-04-24 21:35:26.195404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.195430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.195445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.195458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.195485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.205169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.205323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.205349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.205363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.205376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.205403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.215183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.215342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.215368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.215382] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.215400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.215427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.225229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.225396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.225422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.225437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.225449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.225477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.235375] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.235542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.235567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.235582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.235594] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.235622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.245321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.245478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.245503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.245517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.245529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.245556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.255328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.255485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.255511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.255526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.255538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.255565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.265384] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.265590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.265616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.265639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.265654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.265682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.275393] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.275591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.275619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.275643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.275657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.275686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.285421] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.285584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.285611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.285626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.285647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.285675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.295448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.295599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.295625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.295646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.295658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.295686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.305469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.305640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.305666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.305681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.305698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.305727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.315534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.315715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.315741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.315756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.789 [2024-04-24 21:35:26.315768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.789 [2024-04-24 21:35:26.315796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.789 qpair failed and we were unable to recover it.
00:21:00.789 [2024-04-24 21:35:26.325543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.789 [2024-04-24 21:35:26.325701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.789 [2024-04-24 21:35:26.325727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.789 [2024-04-24 21:35:26.325742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.325754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.325781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.335585] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.335744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.335770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.335784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.335796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.335824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.345566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.345736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.345763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.345777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.345789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.345817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.355647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.355808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.355834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.355849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.355861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.355888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.365676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.365834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.365860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.365876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.365887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.365917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.375721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.375910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.375935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.375950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.375963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.375990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.385709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.385869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.385895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.385910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.385922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.385949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.395754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.395916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.395942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.395962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.395974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.396003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.405802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.405987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.406013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.406027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.406040] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.406067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.415793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.415952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.415977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.415991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.416004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.416030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.425811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.425975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.426001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.426016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.426028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.426055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.435867] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.436040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.436065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.436080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.436092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.436119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.445899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.446060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.446085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.446100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.446112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.446140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:00.790 [2024-04-24 21:35:26.455910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:00.790 [2024-04-24 21:35:26.456070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:00.790 [2024-04-24 21:35:26.456095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:00.790 [2024-04-24 21:35:26.456110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:00.790 [2024-04-24 21:35:26.456121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:00.790 [2024-04-24 21:35:26.456148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.790 qpair failed and we were unable to recover it.
00:21:01.050 [2024-04-24 21:35:26.465927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.050 [2024-04-24 21:35:26.466094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.050 [2024-04-24 21:35:26.466119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.050 [2024-04-24 21:35:26.466134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.050 [2024-04-24 21:35:26.466146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.050 [2024-04-24 21:35:26.466174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.050 qpair failed and we were unable to recover it.
00:21:01.050 [2024-04-24 21:35:26.475950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.050 [2024-04-24 21:35:26.476130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.050 [2024-04-24 21:35:26.476155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.050 [2024-04-24 21:35:26.476170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.050 [2024-04-24 21:35:26.476182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.050 [2024-04-24 21:35:26.476210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.050 qpair failed and we were unable to recover it.
00:21:01.050 [2024-04-24 21:35:26.485961] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.050 [2024-04-24 21:35:26.486128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.050 [2024-04-24 21:35:26.486153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.486173] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.486186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.486213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.495984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.496166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.496191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.496206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.496218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.496245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.506061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.506226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.506251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.506265] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.506277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.506304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.516115] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.516289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.516314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.516329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.516341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.516368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.526223] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.526385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.526410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.526425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.526437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.526464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.536136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.536322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.536347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.536361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.536374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.536401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.546152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.546338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.546364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.546378] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.546390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.546418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.556181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.556354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.556379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.556394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.556406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.556434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.566211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.566367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.566392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.566407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.566419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.566447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.576253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.576405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.576435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.576450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.576462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.576490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.586278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.586439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.586464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.586479] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.586491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.586518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.596307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.596472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.596498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.596513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.596526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.596553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.606346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.606507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.606533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.606548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.606560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.051 [2024-04-24 21:35:26.606588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.051 qpair failed and we were unable to recover it.
00:21:01.051 [2024-04-24 21:35:26.616336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.051 [2024-04-24 21:35:26.616491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.051 [2024-04-24 21:35:26.616517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.051 [2024-04-24 21:35:26.616531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.051 [2024-04-24 21:35:26.616543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.052 [2024-04-24 21:35:26.616571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.052 qpair failed and we were unable to recover it.
00:21:01.052 [2024-04-24 21:35:26.626393] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.052 [2024-04-24 21:35:26.626561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.052 [2024-04-24 21:35:26.626586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.052 [2024-04-24 21:35:26.626601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.052 [2024-04-24 21:35:26.626613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.052 [2024-04-24 21:35:26.626647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.052 qpair failed and we were unable to recover it.
00:21:01.052 [2024-04-24 21:35:26.636417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.052 [2024-04-24 21:35:26.636575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.052 [2024-04-24 21:35:26.636601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.052 [2024-04-24 21:35:26.636615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.052 [2024-04-24 21:35:26.636634] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.052 [2024-04-24 21:35:26.636663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.052 qpair failed and we were unable to recover it.
00:21:01.052 [2024-04-24 21:35:26.646431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.052 [2024-04-24 21:35:26.646594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.052 [2024-04-24 21:35:26.646620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.052 [2024-04-24 21:35:26.646642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.052 [2024-04-24 21:35:26.646656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.052 [2024-04-24 21:35:26.646683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.052 qpair failed and we were unable to recover it.
00:21:01.052 [2024-04-24 21:35:26.656463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.052 [2024-04-24 21:35:26.656637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.052 [2024-04-24 21:35:26.656663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.052 [2024-04-24 21:35:26.656685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.052 [2024-04-24 21:35:26.656697] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.052 [2024-04-24 21:35:26.656725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.052 qpair failed and we were unable to recover it.
00:21:01.052 [2024-04-24 21:35:26.666505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.052 [2024-04-24 21:35:26.666685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.052 [2024-04-24 21:35:26.666715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.052 [2024-04-24 21:35:26.666731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.052 [2024-04-24 21:35:26.666744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.052 [2024-04-24 21:35:26.666771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.052 qpair failed and we were unable to recover it.
00:21:01.052 [2024-04-24 21:35:26.676510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.052 [2024-04-24 21:35:26.676693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.052 [2024-04-24 21:35:26.676718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.052 [2024-04-24 21:35:26.676733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.052 [2024-04-24 21:35:26.676745] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.052 [2024-04-24 21:35:26.676773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.052 qpair failed and we were unable to recover it.
00:21:01.052 [2024-04-24 21:35:26.686530] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.052 [2024-04-24 21:35:26.686694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.052 [2024-04-24 21:35:26.686720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.052 [2024-04-24 21:35:26.686735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.052 [2024-04-24 21:35:26.686747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.052 [2024-04-24 21:35:26.686775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.052 qpair failed and we were unable to recover it.
00:21:01.052 [2024-04-24 21:35:26.696557] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.052 [2024-04-24 21:35:26.696719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.052 [2024-04-24 21:35:26.696744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.052 [2024-04-24 21:35:26.696759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.052 [2024-04-24 21:35:26.696771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.052 [2024-04-24 21:35:26.696798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.052 qpair failed and we were unable to recover it.
00:21:01.052 [2024-04-24 21:35:26.706611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.052 [2024-04-24 21:35:26.706789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.052 [2024-04-24 21:35:26.706815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.052 [2024-04-24 21:35:26.706829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.052 [2024-04-24 21:35:26.706841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.052 [2024-04-24 21:35:26.706874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.052 qpair failed and we were unable to recover it.
00:21:01.052 [2024-04-24 21:35:26.716633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.052 [2024-04-24 21:35:26.716812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.052 [2024-04-24 21:35:26.716838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.052 [2024-04-24 21:35:26.716853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.052 [2024-04-24 21:35:26.716865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.052 [2024-04-24 21:35:26.716893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.052 qpair failed and we were unable to recover it.
00:21:01.313 [2024-04-24 21:35:26.726692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.313 [2024-04-24 21:35:26.726866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.313 [2024-04-24 21:35:26.726890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.313 [2024-04-24 21:35:26.726905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.313 [2024-04-24 21:35:26.726917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.313 [2024-04-24 21:35:26.726944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.313 qpair failed and we were unable to recover it.
00:21:01.313 [2024-04-24 21:35:26.736696] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.313 [2024-04-24 21:35:26.736852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.313 [2024-04-24 21:35:26.736877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.313 [2024-04-24 21:35:26.736892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.313 [2024-04-24 21:35:26.736905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.313 [2024-04-24 21:35:26.736932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.313 qpair failed and we were unable to recover it. 00:21:01.313 [2024-04-24 21:35:26.746773] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.313 [2024-04-24 21:35:26.746940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.313 [2024-04-24 21:35:26.746966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.313 [2024-04-24 21:35:26.746985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.313 [2024-04-24 21:35:26.746998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.313 [2024-04-24 21:35:26.747026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.313 qpair failed and we were unable to recover it. 00:21:01.313 [2024-04-24 21:35:26.756791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.313 [2024-04-24 21:35:26.756958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.313 [2024-04-24 21:35:26.756991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.313 [2024-04-24 21:35:26.757006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.313 [2024-04-24 21:35:26.757018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.313 [2024-04-24 21:35:26.757046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.313 qpair failed and we were unable to recover it. 
00:21:01.313 [2024-04-24 21:35:26.766768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.313 [2024-04-24 21:35:26.766930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.313 [2024-04-24 21:35:26.766956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.313 [2024-04-24 21:35:26.766970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.313 [2024-04-24 21:35:26.766983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.313 [2024-04-24 21:35:26.767010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.313 qpair failed and we were unable to recover it. 00:21:01.313 [2024-04-24 21:35:26.776814] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.313 [2024-04-24 21:35:26.776994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.313 [2024-04-24 21:35:26.777019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.777033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.777045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.777072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 00:21:01.314 [2024-04-24 21:35:26.786880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.787046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.787071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.787086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.787098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.787126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 
00:21:01.314 [2024-04-24 21:35:26.796869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.797043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.797068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.797083] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.797095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.797129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 00:21:01.314 [2024-04-24 21:35:26.806931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.807088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.807114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.807129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.807141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.807168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 00:21:01.314 [2024-04-24 21:35:26.816910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.817066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.817090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.817105] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.817117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.817144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 
00:21:01.314 [2024-04-24 21:35:26.826957] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.827154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.827179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.827193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.827205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.827233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 00:21:01.314 [2024-04-24 21:35:26.836983] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.837156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.837182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.837197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.837209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.837237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 00:21:01.314 [2024-04-24 21:35:26.846997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.847158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.847189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.847206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.847218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.847246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 
00:21:01.314 [2024-04-24 21:35:26.857024] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.857183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.857209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.857223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.857236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.857263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 00:21:01.314 [2024-04-24 21:35:26.867057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.867219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.867244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.867258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.867270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.867297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 00:21:01.314 [2024-04-24 21:35:26.877105] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.877278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.877303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.877318] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.877330] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.877357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 
00:21:01.314 [2024-04-24 21:35:26.887118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.887274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.887299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.887314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.887332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.887360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 00:21:01.314 [2024-04-24 21:35:26.897134] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.897289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.897314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.897329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.897341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.897369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 00:21:01.314 [2024-04-24 21:35:26.907172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.314 [2024-04-24 21:35:26.907334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.314 [2024-04-24 21:35:26.907359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.314 [2024-04-24 21:35:26.907374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.314 [2024-04-24 21:35:26.907386] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.314 [2024-04-24 21:35:26.907413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.314 qpair failed and we were unable to recover it. 
00:21:01.315 [2024-04-24 21:35:26.917218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.315 [2024-04-24 21:35:26.917401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.315 [2024-04-24 21:35:26.917426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.315 [2024-04-24 21:35:26.917441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.315 [2024-04-24 21:35:26.917453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.315 [2024-04-24 21:35:26.917480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.315 qpair failed and we were unable to recover it. 00:21:01.315 [2024-04-24 21:35:26.927403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.315 [2024-04-24 21:35:26.927584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.315 [2024-04-24 21:35:26.927610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.315 [2024-04-24 21:35:26.927625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.315 [2024-04-24 21:35:26.927644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.315 [2024-04-24 21:35:26.927673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.315 qpair failed and we were unable to recover it. 00:21:01.315 [2024-04-24 21:35:26.937308] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.315 [2024-04-24 21:35:26.937500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.315 [2024-04-24 21:35:26.937525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.315 [2024-04-24 21:35:26.937540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.315 [2024-04-24 21:35:26.937552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.315 [2024-04-24 21:35:26.937579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.315 qpair failed and we were unable to recover it. 
00:21:01.315 [2024-04-24 21:35:26.947355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.315 [2024-04-24 21:35:26.947516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.315 [2024-04-24 21:35:26.947541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.315 [2024-04-24 21:35:26.947556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.315 [2024-04-24 21:35:26.947567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.315 [2024-04-24 21:35:26.947595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.315 qpair failed and we were unable to recover it. 00:21:01.315 [2024-04-24 21:35:26.957392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.315 [2024-04-24 21:35:26.957586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.315 [2024-04-24 21:35:26.957611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.315 [2024-04-24 21:35:26.957626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.315 [2024-04-24 21:35:26.957653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.315 [2024-04-24 21:35:26.957682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.315 qpair failed and we were unable to recover it. 00:21:01.315 [2024-04-24 21:35:26.967319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.315 [2024-04-24 21:35:26.967479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.315 [2024-04-24 21:35:26.967504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.315 [2024-04-24 21:35:26.967519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.315 [2024-04-24 21:35:26.967531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.315 [2024-04-24 21:35:26.967559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.315 qpair failed and we were unable to recover it. 
00:21:01.315 [2024-04-24 21:35:26.977366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.315 [2024-04-24 21:35:26.977528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.315 [2024-04-24 21:35:26.977553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.315 [2024-04-24 21:35:26.977568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.315 [2024-04-24 21:35:26.977585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.315 [2024-04-24 21:35:26.977613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.315 qpair failed and we were unable to recover it. 00:21:01.315 [2024-04-24 21:35:26.987476] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.315 [2024-04-24 21:35:26.987693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.315 [2024-04-24 21:35:26.987719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.315 [2024-04-24 21:35:26.987733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.315 [2024-04-24 21:35:26.987746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.315 [2024-04-24 21:35:26.987773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.315 qpair failed and we were unable to recover it. 00:21:01.575 [2024-04-24 21:35:26.997414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.575 [2024-04-24 21:35:26.997614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.575 [2024-04-24 21:35:26.997652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.575 [2024-04-24 21:35:26.997670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.575 [2024-04-24 21:35:26.997682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.575 [2024-04-24 21:35:26.997711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.575 qpair failed and we were unable to recover it. 
00:21:01.575 [2024-04-24 21:35:27.007433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.575 [2024-04-24 21:35:27.007608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.575 [2024-04-24 21:35:27.007640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.575 [2024-04-24 21:35:27.007657] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.575 [2024-04-24 21:35:27.007669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.575 [2024-04-24 21:35:27.007697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.575 qpair failed and we were unable to recover it. 00:21:01.575 [2024-04-24 21:35:27.017461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.575 [2024-04-24 21:35:27.017643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.575 [2024-04-24 21:35:27.017669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.575 [2024-04-24 21:35:27.017683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.575 [2024-04-24 21:35:27.017695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.575 [2024-04-24 21:35:27.017722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.575 qpair failed and we were unable to recover it. 00:21:01.575 [2024-04-24 21:35:27.027501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.575 [2024-04-24 21:35:27.027675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.575 [2024-04-24 21:35:27.027700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.575 [2024-04-24 21:35:27.027715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.575 [2024-04-24 21:35:27.027728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.575 [2024-04-24 21:35:27.027755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.575 qpair failed and we were unable to recover it. 
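On the target side, the recurring "Unknown controller ID 0x1" entries come from _nvmf_ctrlr_add_io_qpair() failing to find the controller named in the host's CONNECT data, presumably because that controller was destroyed between the admin CONNECT and these I/O CONNECTs. A self-contained sketch of that check follows; struct ctrlr, lookup_ctrlr and add_io_qpair are simplified stand-ins for illustration, not SPDK's internal types:

    #include <stdint.h>
    #include <stdio.h>

    /* Opaque stand-in for an NVMe-oF controller object. */
    struct ctrlr { uint16_t cntlid; };

    /* Stub lookup: returns NULL, mirroring this run, where controller
     * 0x1 has already gone away by the time the I/O CONNECT arrives. */
    static struct ctrlr *
    lookup_ctrlr(uint16_t cntlid)
    {
        (void)cntlid;
        return NULL;
    }

    static int
    add_io_qpair(uint16_t cntlid)
    {
        struct ctrlr *ctrlr = lookup_ctrlr(cntlid);

        if (ctrlr == NULL) {
            fprintf(stderr, "Unknown controller ID 0x%x\n", cntlid);
            return -1; /* target answers CONNECT with sct 1 / sc 0x82 */
        }
        return 0;
    }

    int
    main(void)
    {
        return add_io_qpair(0x1) ? 1 : 0;
    }

In the real target, the failed lookup is answered with the fabrics command-specific error status that the host then logs as "sct 1, sc 130".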
00:21:01.575 [2024-04-24 21:35:27.037549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.575 [2024-04-24 21:35:27.037715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.575 [2024-04-24 21:35:27.037741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.575 [2024-04-24 21:35:27.037755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.575 [2024-04-24 21:35:27.037767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.575 [2024-04-24 21:35:27.037797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.575 qpair failed and we were unable to recover it. 00:21:01.575 [2024-04-24 21:35:27.047575] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.575 [2024-04-24 21:35:27.047745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.575 [2024-04-24 21:35:27.047771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.047786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.047799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.047826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 00:21:01.576 [2024-04-24 21:35:27.057569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.057738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.057764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.057779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.057790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.057818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 
00:21:01.576 [2024-04-24 21:35:27.067662] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.067827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.067852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.067866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.067883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.067912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 00:21:01.576 [2024-04-24 21:35:27.077719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.077882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.077907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.077921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.077934] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.077961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 00:21:01.576 [2024-04-24 21:35:27.087672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.087827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.087852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.087867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.087878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.087906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 
00:21:01.576 [2024-04-24 21:35:27.097706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.097856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.097881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.097896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.097908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.097935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 00:21:01.576 [2024-04-24 21:35:27.107756] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.107921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.107947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.107961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.107973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.108000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 00:21:01.576 [2024-04-24 21:35:27.117749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.117922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.117948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.117963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.117975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.118002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 
00:21:01.576 [2024-04-24 21:35:27.127811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.127965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.127990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.128004] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.128017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.128044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 00:21:01.576 [2024-04-24 21:35:27.137806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.137962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.137988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.138003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.138015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.138044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 00:21:01.576 [2024-04-24 21:35:27.147902] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.148094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.148120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.148134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.148147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.148174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 
00:21:01.576 [2024-04-24 21:35:27.157859] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.158025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.158050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.158071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.158084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.158112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 00:21:01.576 [2024-04-24 21:35:27.167900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.168072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.168097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.168112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.168124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.168151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 00:21:01.576 [2024-04-24 21:35:27.177909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.576 [2024-04-24 21:35:27.178069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.576 [2024-04-24 21:35:27.178094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.576 [2024-04-24 21:35:27.178108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.576 [2024-04-24 21:35:27.178120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.576 [2024-04-24 21:35:27.178147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.576 qpair failed and we were unable to recover it. 
00:21:01.577 [2024-04-24 21:35:27.187963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.577 [2024-04-24 21:35:27.188123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.577 [2024-04-24 21:35:27.188148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.577 [2024-04-24 21:35:27.188163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.577 [2024-04-24 21:35:27.188175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.577 [2024-04-24 21:35:27.188202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.577 qpair failed and we were unable to recover it. 00:21:01.577 [2024-04-24 21:35:27.197966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.577 [2024-04-24 21:35:27.198127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.577 [2024-04-24 21:35:27.198153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.577 [2024-04-24 21:35:27.198167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.577 [2024-04-24 21:35:27.198180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.577 [2024-04-24 21:35:27.198207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.577 qpair failed and we were unable to recover it. 00:21:01.577 [2024-04-24 21:35:27.208010] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.577 [2024-04-24 21:35:27.208166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.577 [2024-04-24 21:35:27.208191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.577 [2024-04-24 21:35:27.208205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.577 [2024-04-24 21:35:27.208217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.577 [2024-04-24 21:35:27.208245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.577 qpair failed and we were unable to recover it. 
00:21:01.577 [2024-04-24 21:35:27.218029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.577 [2024-04-24 21:35:27.218184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.577 [2024-04-24 21:35:27.218209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.577 [2024-04-24 21:35:27.218224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.577 [2024-04-24 21:35:27.218236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.577 [2024-04-24 21:35:27.218263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.577 qpair failed and we were unable to recover it. 00:21:01.577 [2024-04-24 21:35:27.228061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.577 [2024-04-24 21:35:27.228220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.577 [2024-04-24 21:35:27.228245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.577 [2024-04-24 21:35:27.228260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.577 [2024-04-24 21:35:27.228272] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.577 [2024-04-24 21:35:27.228299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.577 qpair failed and we were unable to recover it. 00:21:01.577 [2024-04-24 21:35:27.238105] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.577 [2024-04-24 21:35:27.238268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.577 [2024-04-24 21:35:27.238293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.577 [2024-04-24 21:35:27.238307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.577 [2024-04-24 21:35:27.238320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:01.577 [2024-04-24 21:35:27.238347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.577 qpair failed and we were unable to recover it. 
00:21:01.577 [2024-04-24 21:35:27.248116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.577 [2024-04-24 21:35:27.248270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.577 [2024-04-24 21:35:27.248294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.577 [2024-04-24 21:35:27.248314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.577 [2024-04-24 21:35:27.248327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.577 [2024-04-24 21:35:27.248354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.577 qpair failed and we were unable to recover it.
00:21:01.837 [2024-04-24 21:35:27.258167] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.837 [2024-04-24 21:35:27.258322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.837 [2024-04-24 21:35:27.258348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.837 [2024-04-24 21:35:27.258363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.837 [2024-04-24 21:35:27.258375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.837 [2024-04-24 21:35:27.258402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.837 qpair failed and we were unable to recover it.
00:21:01.837 [2024-04-24 21:35:27.268224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.837 [2024-04-24 21:35:27.268381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.837 [2024-04-24 21:35:27.268406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.837 [2024-04-24 21:35:27.268420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.837 [2024-04-24 21:35:27.268432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.837 [2024-04-24 21:35:27.268459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.837 qpair failed and we were unable to recover it.
00:21:01.837 [2024-04-24 21:35:27.278203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.837 [2024-04-24 21:35:27.278373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.837 [2024-04-24 21:35:27.278398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.837 [2024-04-24 21:35:27.278413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.837 [2024-04-24 21:35:27.278424] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.837 [2024-04-24 21:35:27.278452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.837 qpair failed and we were unable to recover it.
00:21:01.837 [2024-04-24 21:35:27.288228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.837 [2024-04-24 21:35:27.288392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.837 [2024-04-24 21:35:27.288415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.837 [2024-04-24 21:35:27.288429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.837 [2024-04-24 21:35:27.288442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.837 [2024-04-24 21:35:27.288468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.837 qpair failed and we were unable to recover it.
00:21:01.837 [2024-04-24 21:35:27.298302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.837 [2024-04-24 21:35:27.298463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.837 [2024-04-24 21:35:27.298489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.837 [2024-04-24 21:35:27.298504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.837 [2024-04-24 21:35:27.298516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.837 [2024-04-24 21:35:27.298543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.837 qpair failed and we were unable to recover it.
00:21:01.837 [2024-04-24 21:35:27.308296] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.837 [2024-04-24 21:35:27.308457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.837 [2024-04-24 21:35:27.308483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.837 [2024-04-24 21:35:27.308498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.837 [2024-04-24 21:35:27.308510] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.837 [2024-04-24 21:35:27.308537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.837 qpair failed and we were unable to recover it.
00:21:01.837 [2024-04-24 21:35:27.318326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.837 [2024-04-24 21:35:27.318492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.837 [2024-04-24 21:35:27.318517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.837 [2024-04-24 21:35:27.318532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.837 [2024-04-24 21:35:27.318544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.837 [2024-04-24 21:35:27.318572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.837 qpair failed and we were unable to recover it.
00:21:01.837 [2024-04-24 21:35:27.328381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.837 [2024-04-24 21:35:27.328542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.837 [2024-04-24 21:35:27.328567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.837 [2024-04-24 21:35:27.328582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.837 [2024-04-24 21:35:27.328594] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.837 [2024-04-24 21:35:27.328622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.837 qpair failed and we were unable to recover it.
00:21:01.837 [2024-04-24 21:35:27.338391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.837 [2024-04-24 21:35:27.338555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.837 [2024-04-24 21:35:27.338581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.338601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.338614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.338647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.348430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.348592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.348616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.348638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.348652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.348679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.358448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.358610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.358641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.358657] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.358670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.358698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.368469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.368625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.368660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.368675] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.368687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.368714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.378509] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.378668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.378694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.378709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.378721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.378748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.388546] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.388748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.388774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.388790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.388802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.388831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.398560] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.398743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.398769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.398784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.398796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.398823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.408642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.408871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.408896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.408911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.408923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.408950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.418614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.418798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.418824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.418838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.418851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.418878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.428663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.428820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.428849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.428865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.428877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.428905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.438684] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.438838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.438864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.438878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.438890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.438918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.448714] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.448872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.448898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.448913] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.448925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.448952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.458733] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.458894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.458919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.458934] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.458946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.458973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.838 [2024-04-24 21:35:27.468780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.838 [2024-04-24 21:35:27.468942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.838 [2024-04-24 21:35:27.468967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.838 [2024-04-24 21:35:27.468981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.838 [2024-04-24 21:35:27.468994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.838 [2024-04-24 21:35:27.469029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.838 qpair failed and we were unable to recover it.
00:21:01.839 [2024-04-24 21:35:27.478809] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.839 [2024-04-24 21:35:27.478973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.839 [2024-04-24 21:35:27.478998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.839 [2024-04-24 21:35:27.479013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.839 [2024-04-24 21:35:27.479025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.839 [2024-04-24 21:35:27.479053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.839 qpair failed and we were unable to recover it.
00:21:01.839 [2024-04-24 21:35:27.488835] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.839 [2024-04-24 21:35:27.488989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.839 [2024-04-24 21:35:27.489014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.839 [2024-04-24 21:35:27.489029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.839 [2024-04-24 21:35:27.489041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.839 [2024-04-24 21:35:27.489068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.839 qpair failed and we were unable to recover it.
00:21:01.839 [2024-04-24 21:35:27.498884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.839 [2024-04-24 21:35:27.499039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.839 [2024-04-24 21:35:27.499064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.839 [2024-04-24 21:35:27.499079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.839 [2024-04-24 21:35:27.499091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.839 [2024-04-24 21:35:27.499118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.839 qpair failed and we were unable to recover it.
00:21:01.839 [2024-04-24 21:35:27.509045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.839 [2024-04-24 21:35:27.509240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.839 [2024-04-24 21:35:27.509266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.839 [2024-04-24 21:35:27.509280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.839 [2024-04-24 21:35:27.509292] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:01.839 [2024-04-24 21:35:27.509319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.839 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.518970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.519138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.519170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.519186] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.519198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.519225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.528979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.529142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.529168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.529182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.529194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.529222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.539137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.539326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.539351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.539366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.539378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.539405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.549050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.549208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.549233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.549247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.549260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.549287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.559088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.559253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.559278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.559293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.559305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.559340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.569127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.569296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.569322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.569337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.569349] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.569377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.579151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.579309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.579335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.579350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.579362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.579388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.589189] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.589346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.589372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.589386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.589399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.589425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.599248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.599473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.599498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.599513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.599526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.599553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.609250] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.609410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.609442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.609457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.609469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.609497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.619240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.619402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.619428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.619442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.619454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.619481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.629262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.629423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.629448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.099 [2024-04-24 21:35:27.629463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.099 [2024-04-24 21:35:27.629474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.099 [2024-04-24 21:35:27.629502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.099 qpair failed and we were unable to recover it.
00:21:02.099 [2024-04-24 21:35:27.639287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.099 [2024-04-24 21:35:27.639451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.099 [2024-04-24 21:35:27.639476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.639491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.639503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.639530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.649338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.649506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.649532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.649546] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.649558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.649591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.659341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.659512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.659537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.659551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.659563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.659590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.669381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.669582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.669607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.669621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.669641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.669669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.679413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.679638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.679664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.679679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.679691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.679719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.689436] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.689597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.689622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.689645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.689658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.689686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.699475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.699638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.699668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.699684] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.699696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.699723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.709495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.709659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.709684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.709699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.709711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.709738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.719523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.719698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.719724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.719738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.719750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.719778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.729543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.729707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.729731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.729745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.729757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.729785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.739583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.739750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.739775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.739789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.739807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.739835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.749625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.749792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.749818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.749832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.749843] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.749871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.759678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.759844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.759870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.759884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.759896] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.759924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.100 [2024-04-24 21:35:27.769768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.100 [2024-04-24 21:35:27.769935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.100 [2024-04-24 21:35:27.769960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.100 [2024-04-24 21:35:27.769974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.100 [2024-04-24 21:35:27.769986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.100 [2024-04-24 21:35:27.770014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.100 qpair failed and we were unable to recover it.
00:21:02.362 [2024-04-24 21:35:27.779716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.362 [2024-04-24 21:35:27.779914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.362 [2024-04-24 21:35:27.779939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.362 [2024-04-24 21:35:27.779954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.362 [2024-04-24 21:35:27.779966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.362 [2024-04-24 21:35:27.779993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.362 qpair failed and we were unable to recover it.
00:21:02.362 [2024-04-24 21:35:27.789739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.362 [2024-04-24 21:35:27.789917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.362 [2024-04-24 21:35:27.789942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.362 [2024-04-24 21:35:27.789957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.362 [2024-04-24 21:35:27.789969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.362 [2024-04-24 21:35:27.789996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.362 qpair failed and we were unable to recover it.
00:21:02.362 [2024-04-24 21:35:27.799776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.362 [2024-04-24 21:35:27.799934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.362 [2024-04-24 21:35:27.799960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.362 [2024-04-24 21:35:27.799974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.362 [2024-04-24 21:35:27.799987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.362 [2024-04-24 21:35:27.800016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.362 qpair failed and we were unable to recover it.
00:21:02.362 [2024-04-24 21:35:27.809785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.362 [2024-04-24 21:35:27.809940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.362 [2024-04-24 21:35:27.809965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.362 [2024-04-24 21:35:27.809980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.362 [2024-04-24 21:35:27.809992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.362 [2024-04-24 21:35:27.810018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.362 qpair failed and we were unable to recover it.
00:21:02.362 [2024-04-24 21:35:27.819883] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.362 [2024-04-24 21:35:27.820046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.362 [2024-04-24 21:35:27.820072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.362 [2024-04-24 21:35:27.820087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.362 [2024-04-24 21:35:27.820099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.362 [2024-04-24 21:35:27.820127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.362 qpair failed and we were unable to recover it.
00:21:02.362 [2024-04-24 21:35:27.829892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.362 [2024-04-24 21:35:27.830088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.362 [2024-04-24 21:35:27.830113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.362 [2024-04-24 21:35:27.830128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.362 [2024-04-24 21:35:27.830145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.362 [2024-04-24 21:35:27.830173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.362 qpair failed and we were unable to recover it.
00:21:02.362 [2024-04-24 21:35:27.839900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.362 [2024-04-24 21:35:27.840069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.362 [2024-04-24 21:35:27.840094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.362 [2024-04-24 21:35:27.840109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.362 [2024-04-24 21:35:27.840121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.362 [2024-04-24 21:35:27.840149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.362 qpair failed and we were unable to recover it.
00:21:02.362 [2024-04-24 21:35:27.849920] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.362 [2024-04-24 21:35:27.850119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.362 [2024-04-24 21:35:27.850145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.362 [2024-04-24 21:35:27.850160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.362 [2024-04-24 21:35:27.850172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.362 [2024-04-24 21:35:27.850199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.362 qpair failed and we were unable to recover it.
00:21:02.362 [2024-04-24 21:35:27.859964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.362 [2024-04-24 21:35:27.860115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.362 [2024-04-24 21:35:27.860140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.362 [2024-04-24 21:35:27.860155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.363 [2024-04-24 21:35:27.860168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.363 [2024-04-24 21:35:27.860195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.363 qpair failed and we were unable to recover it.
00:21:02.363 [2024-04-24 21:35:27.870022] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.363 [2024-04-24 21:35:27.870225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.363 [2024-04-24 21:35:27.870250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.363 [2024-04-24 21:35:27.870265] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.363 [2024-04-24 21:35:27.870277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.363 [2024-04-24 21:35:27.870305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.363 qpair failed and we were unable to recover it.
00:21:02.363 [2024-04-24 21:35:27.879972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.363 [2024-04-24 21:35:27.880138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.363 [2024-04-24 21:35:27.880162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.363 [2024-04-24 21:35:27.880177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.363 [2024-04-24 21:35:27.880188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.363 [2024-04-24 21:35:27.880215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.363 qpair failed and we were unable to recover it.
00:21:02.363 [2024-04-24 21:35:27.890028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.363 [2024-04-24 21:35:27.890187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.363 [2024-04-24 21:35:27.890212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.363 [2024-04-24 21:35:27.890226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.363 [2024-04-24 21:35:27.890238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.363 [2024-04-24 21:35:27.890265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.363 qpair failed and we were unable to recover it.
00:21:02.363 [2024-04-24 21:35:27.900159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.363 [2024-04-24 21:35:27.900319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.363 [2024-04-24 21:35:27.900344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.363 [2024-04-24 21:35:27.900358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.363 [2024-04-24 21:35:27.900370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.363 [2024-04-24 21:35:27.900398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.363 qpair failed and we were unable to recover it.
00:21:02.363 [2024-04-24 21:35:27.910114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.363 [2024-04-24 21:35:27.910287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.363 [2024-04-24 21:35:27.910312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.363 [2024-04-24 21:35:27.910326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.363 [2024-04-24 21:35:27.910338] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.363 [2024-04-24 21:35:27.910366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.363 qpair failed and we were unable to recover it.
00:21:02.363 [2024-04-24 21:35:27.920187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.363 [2024-04-24 21:35:27.920371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.363 [2024-04-24 21:35:27.920396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.363 [2024-04-24 21:35:27.920416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.363 [2024-04-24 21:35:27.920429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.363 [2024-04-24 21:35:27.920457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.363 qpair failed and we were unable to recover it.
00:21:02.363 [2024-04-24 21:35:27.930136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.363 [2024-04-24 21:35:27.930291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.363 [2024-04-24 21:35:27.930316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.363 [2024-04-24 21:35:27.930330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.363 [2024-04-24 21:35:27.930343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:02.363 [2024-04-24 21:35:27.930369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.363 qpair failed and we were unable to recover it.
00:21:02.363 [2024-04-24 21:35:27.940151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.363 [2024-04-24 21:35:27.940310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.363 [2024-04-24 21:35:27.940335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.363 [2024-04-24 21:35:27.940350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.363 [2024-04-24 21:35:27.940361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.363 [2024-04-24 21:35:27.940389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.363 qpair failed and we were unable to recover it. 00:21:02.363 [2024-04-24 21:35:27.950192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.363 [2024-04-24 21:35:27.950356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.363 [2024-04-24 21:35:27.950381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.363 [2024-04-24 21:35:27.950395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.363 [2024-04-24 21:35:27.950407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.363 [2024-04-24 21:35:27.950434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.363 qpair failed and we were unable to recover it. 00:21:02.363 [2024-04-24 21:35:27.960248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.363 [2024-04-24 21:35:27.960408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.363 [2024-04-24 21:35:27.960433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.363 [2024-04-24 21:35:27.960447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.363 [2024-04-24 21:35:27.960460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.363 [2024-04-24 21:35:27.960487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.363 qpair failed and we were unable to recover it. 
00:21:02.363 [2024-04-24 21:35:27.970267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.363 [2024-04-24 21:35:27.970429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.363 [2024-04-24 21:35:27.970454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.363 [2024-04-24 21:35:27.970469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.363 [2024-04-24 21:35:27.970481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.363 [2024-04-24 21:35:27.970508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.363 qpair failed and we were unable to recover it. 00:21:02.363 [2024-04-24 21:35:27.980268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.363 [2024-04-24 21:35:27.980432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.363 [2024-04-24 21:35:27.980456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.363 [2024-04-24 21:35:27.980471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.363 [2024-04-24 21:35:27.980483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.363 [2024-04-24 21:35:27.980510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.363 qpair failed and we were unable to recover it. 00:21:02.363 [2024-04-24 21:35:27.990354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.363 [2024-04-24 21:35:27.990521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.363 [2024-04-24 21:35:27.990546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.363 [2024-04-24 21:35:27.990560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.363 [2024-04-24 21:35:27.990572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.363 [2024-04-24 21:35:27.990599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.363 qpair failed and we were unable to recover it. 
00:21:02.364 [2024-04-24 21:35:28.000319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.364 [2024-04-24 21:35:28.000470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.364 [2024-04-24 21:35:28.000495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.364 [2024-04-24 21:35:28.000509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.364 [2024-04-24 21:35:28.000522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.364 [2024-04-24 21:35:28.000549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.364 qpair failed and we were unable to recover it. 00:21:02.364 [2024-04-24 21:35:28.010353] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.364 [2024-04-24 21:35:28.010506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.364 [2024-04-24 21:35:28.010531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.364 [2024-04-24 21:35:28.010550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.364 [2024-04-24 21:35:28.010563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.364 [2024-04-24 21:35:28.010590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.364 qpair failed and we were unable to recover it. 00:21:02.364 [2024-04-24 21:35:28.020415] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.364 [2024-04-24 21:35:28.020577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.364 [2024-04-24 21:35:28.020603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.364 [2024-04-24 21:35:28.020617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.364 [2024-04-24 21:35:28.020636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.364 [2024-04-24 21:35:28.020665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.364 qpair failed and we were unable to recover it. 
00:21:02.364 [2024-04-24 21:35:28.030420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.364 [2024-04-24 21:35:28.030580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.364 [2024-04-24 21:35:28.030605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.364 [2024-04-24 21:35:28.030620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.364 [2024-04-24 21:35:28.030639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.364 [2024-04-24 21:35:28.030667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.364 qpair failed and we were unable to recover it. 00:21:02.624 [2024-04-24 21:35:28.040443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.624 [2024-04-24 21:35:28.040605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.624 [2024-04-24 21:35:28.040637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.624 [2024-04-24 21:35:28.040654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.040666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.040694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 00:21:02.625 [2024-04-24 21:35:28.050578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.625 [2024-04-24 21:35:28.050737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.625 [2024-04-24 21:35:28.050763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.625 [2024-04-24 21:35:28.050778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.050790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.050817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 
00:21:02.625 [2024-04-24 21:35:28.060495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.625 [2024-04-24 21:35:28.060669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.625 [2024-04-24 21:35:28.060694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.625 [2024-04-24 21:35:28.060709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.060721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.060748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 00:21:02.625 [2024-04-24 21:35:28.070543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.625 [2024-04-24 21:35:28.070740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.625 [2024-04-24 21:35:28.070765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.625 [2024-04-24 21:35:28.070779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.070792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.070820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 00:21:02.625 [2024-04-24 21:35:28.080574] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.625 [2024-04-24 21:35:28.080741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.625 [2024-04-24 21:35:28.080766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.625 [2024-04-24 21:35:28.080781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.080793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.080820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 
00:21:02.625 [2024-04-24 21:35:28.090608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.625 [2024-04-24 21:35:28.090774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.625 [2024-04-24 21:35:28.090799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.625 [2024-04-24 21:35:28.090814] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.090826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.090854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 00:21:02.625 [2024-04-24 21:35:28.100620] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.625 [2024-04-24 21:35:28.100830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.625 [2024-04-24 21:35:28.100856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.625 [2024-04-24 21:35:28.100876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.100889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.100916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 00:21:02.625 [2024-04-24 21:35:28.110646] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.625 [2024-04-24 21:35:28.110812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.625 [2024-04-24 21:35:28.110837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.625 [2024-04-24 21:35:28.110852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.110864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.110890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 
00:21:02.625 [2024-04-24 21:35:28.120758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.625 [2024-04-24 21:35:28.120925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.625 [2024-04-24 21:35:28.120951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.625 [2024-04-24 21:35:28.120965] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.120977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.121005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 00:21:02.625 [2024-04-24 21:35:28.130736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.625 [2024-04-24 21:35:28.130926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.625 [2024-04-24 21:35:28.130952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.625 [2024-04-24 21:35:28.130967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.130979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.131008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 00:21:02.625 [2024-04-24 21:35:28.140721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.625 [2024-04-24 21:35:28.140873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.625 [2024-04-24 21:35:28.140899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.625 [2024-04-24 21:35:28.140913] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.140925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.140953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 
00:21:02.625 [2024-04-24 21:35:28.150755] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.625 [2024-04-24 21:35:28.150916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.625 [2024-04-24 21:35:28.150941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.625 [2024-04-24 21:35:28.150956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.625 [2024-04-24 21:35:28.150968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.625 [2024-04-24 21:35:28.150995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.625 qpair failed and we were unable to recover it. 00:21:02.625 [2024-04-24 21:35:28.160811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.160970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.626 [2024-04-24 21:35:28.160995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.626 [2024-04-24 21:35:28.161009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.626 [2024-04-24 21:35:28.161021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.626 [2024-04-24 21:35:28.161049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.626 qpair failed and we were unable to recover it. 00:21:02.626 [2024-04-24 21:35:28.170904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.171094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.626 [2024-04-24 21:35:28.171119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.626 [2024-04-24 21:35:28.171133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.626 [2024-04-24 21:35:28.171145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.626 [2024-04-24 21:35:28.171172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.626 qpair failed and we were unable to recover it. 
00:21:02.626 [2024-04-24 21:35:28.180858] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.181053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.626 [2024-04-24 21:35:28.181079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.626 [2024-04-24 21:35:28.181093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.626 [2024-04-24 21:35:28.181105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.626 [2024-04-24 21:35:28.181132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.626 qpair failed and we were unable to recover it. 00:21:02.626 [2024-04-24 21:35:28.190934] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.191134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.626 [2024-04-24 21:35:28.191177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.626 [2024-04-24 21:35:28.191196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.626 [2024-04-24 21:35:28.191209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.626 [2024-04-24 21:35:28.191238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.626 qpair failed and we were unable to recover it. 00:21:02.626 [2024-04-24 21:35:28.200906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.201070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.626 [2024-04-24 21:35:28.201096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.626 [2024-04-24 21:35:28.201110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.626 [2024-04-24 21:35:28.201122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.626 [2024-04-24 21:35:28.201150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.626 qpair failed and we were unable to recover it. 
00:21:02.626 [2024-04-24 21:35:28.210938] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.211099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.626 [2024-04-24 21:35:28.211124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.626 [2024-04-24 21:35:28.211139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.626 [2024-04-24 21:35:28.211151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.626 [2024-04-24 21:35:28.211178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.626 qpair failed and we were unable to recover it. 00:21:02.626 [2024-04-24 21:35:28.220962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.221171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.626 [2024-04-24 21:35:28.221195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.626 [2024-04-24 21:35:28.221210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.626 [2024-04-24 21:35:28.221222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.626 [2024-04-24 21:35:28.221249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.626 qpair failed and we were unable to recover it. 00:21:02.626 [2024-04-24 21:35:28.231080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.231242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.626 [2024-04-24 21:35:28.231266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.626 [2024-04-24 21:35:28.231280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.626 [2024-04-24 21:35:28.231293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.626 [2024-04-24 21:35:28.231325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.626 qpair failed and we were unable to recover it. 
00:21:02.626 [2024-04-24 21:35:28.241046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.241234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.626 [2024-04-24 21:35:28.241259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.626 [2024-04-24 21:35:28.241274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.626 [2024-04-24 21:35:28.241286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.626 [2024-04-24 21:35:28.241313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.626 qpair failed and we were unable to recover it. 00:21:02.626 [2024-04-24 21:35:28.251030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.251196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.626 [2024-04-24 21:35:28.251221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.626 [2024-04-24 21:35:28.251236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.626 [2024-04-24 21:35:28.251248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.626 [2024-04-24 21:35:28.251275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.626 qpair failed and we were unable to recover it. 00:21:02.626 [2024-04-24 21:35:28.261101] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.261260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.626 [2024-04-24 21:35:28.261286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.626 [2024-04-24 21:35:28.261300] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.626 [2024-04-24 21:35:28.261312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.626 [2024-04-24 21:35:28.261339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.626 qpair failed and we were unable to recover it. 
00:21:02.626 [2024-04-24 21:35:28.271093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.626 [2024-04-24 21:35:28.271248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.627 [2024-04-24 21:35:28.271273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.627 [2024-04-24 21:35:28.271287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.627 [2024-04-24 21:35:28.271299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.627 [2024-04-24 21:35:28.271326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.627 qpair failed and we were unable to recover it. 00:21:02.627 [2024-04-24 21:35:28.281150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.627 [2024-04-24 21:35:28.281315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.627 [2024-04-24 21:35:28.281346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.627 [2024-04-24 21:35:28.281362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.627 [2024-04-24 21:35:28.281374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.627 [2024-04-24 21:35:28.281402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.627 qpair failed and we were unable to recover it. 00:21:02.627 [2024-04-24 21:35:28.291150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.627 [2024-04-24 21:35:28.291307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.627 [2024-04-24 21:35:28.291332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.627 [2024-04-24 21:35:28.291347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.627 [2024-04-24 21:35:28.291359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.627 [2024-04-24 21:35:28.291386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.627 qpair failed and we were unable to recover it. 
00:21:02.888 [2024-04-24 21:35:28.301206] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.888 [2024-04-24 21:35:28.301364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.888 [2024-04-24 21:35:28.301389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.888 [2024-04-24 21:35:28.301403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.888 [2024-04-24 21:35:28.301415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.888 [2024-04-24 21:35:28.301442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.888 qpair failed and we were unable to recover it. 00:21:02.888 [2024-04-24 21:35:28.311256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.888 [2024-04-24 21:35:28.311417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.888 [2024-04-24 21:35:28.311442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.888 [2024-04-24 21:35:28.311457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.888 [2024-04-24 21:35:28.311469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.888 [2024-04-24 21:35:28.311499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.888 qpair failed and we were unable to recover it. 00:21:02.888 [2024-04-24 21:35:28.321251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.888 [2024-04-24 21:35:28.321452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.888 [2024-04-24 21:35:28.321477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.888 [2024-04-24 21:35:28.321492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.888 [2024-04-24 21:35:28.321504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.888 [2024-04-24 21:35:28.321537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.888 qpair failed and we were unable to recover it. 
00:21:02.888 [2024-04-24 21:35:28.331318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.888 [2024-04-24 21:35:28.331478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.888 [2024-04-24 21:35:28.331504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.888 [2024-04-24 21:35:28.331519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.888 [2024-04-24 21:35:28.331531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.888 [2024-04-24 21:35:28.331558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.888 qpair failed and we were unable to recover it. 00:21:02.888 [2024-04-24 21:35:28.341334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.888 [2024-04-24 21:35:28.341522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.888 [2024-04-24 21:35:28.341547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.888 [2024-04-24 21:35:28.341562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.888 [2024-04-24 21:35:28.341574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.888 [2024-04-24 21:35:28.341601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.888 qpair failed and we were unable to recover it. 00:21:02.888 [2024-04-24 21:35:28.351330] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.888 [2024-04-24 21:35:28.351503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.888 [2024-04-24 21:35:28.351529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.888 [2024-04-24 21:35:28.351543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.888 [2024-04-24 21:35:28.351555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.888 [2024-04-24 21:35:28.351583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.888 qpair failed and we were unable to recover it. 
00:21:02.888 [2024-04-24 21:35:28.361477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.888 [2024-04-24 21:35:28.361674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.888 [2024-04-24 21:35:28.361700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.888 [2024-04-24 21:35:28.361715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.888 [2024-04-24 21:35:28.361727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.888 [2024-04-24 21:35:28.361755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.888 qpair failed and we were unable to recover it. 00:21:02.888 [2024-04-24 21:35:28.371418] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.888 [2024-04-24 21:35:28.371599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.888 [2024-04-24 21:35:28.371639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.888 [2024-04-24 21:35:28.371658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.888 [2024-04-24 21:35:28.371671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.888 [2024-04-24 21:35:28.371700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.888 qpair failed and we were unable to recover it. 00:21:02.888 [2024-04-24 21:35:28.381531] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.888 [2024-04-24 21:35:28.381701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.888 [2024-04-24 21:35:28.381727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.888 [2024-04-24 21:35:28.381746] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.888 [2024-04-24 21:35:28.381759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.888 [2024-04-24 21:35:28.381788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.888 qpair failed and we were unable to recover it. 
00:21:02.888 [2024-04-24 21:35:28.391469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.888 [2024-04-24 21:35:28.391678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.888 [2024-04-24 21:35:28.391704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.889 [2024-04-24 21:35:28.391719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.889 [2024-04-24 21:35:28.391731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.889 [2024-04-24 21:35:28.391759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.889 qpair failed and we were unable to recover it. 00:21:02.889 [2024-04-24 21:35:28.401461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.889 [2024-04-24 21:35:28.401641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.889 [2024-04-24 21:35:28.401667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.889 [2024-04-24 21:35:28.401681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.889 [2024-04-24 21:35:28.401693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.889 [2024-04-24 21:35:28.401721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.889 qpair failed and we were unable to recover it. 00:21:02.889 [2024-04-24 21:35:28.411518] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.889 [2024-04-24 21:35:28.411728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.889 [2024-04-24 21:35:28.411754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.889 [2024-04-24 21:35:28.411768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.889 [2024-04-24 21:35:28.411780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:02.889 [2024-04-24 21:35:28.411813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.889 qpair failed and we were unable to recover it. 
00:21:03.415 [2024-04-24 21:35:29.073414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.415 [2024-04-24 21:35:29.073619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.415 [2024-04-24 21:35:29.073652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.415 [2024-04-24 21:35:29.073668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.415 [2024-04-24 21:35:29.073681] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:03.415 [2024-04-24 21:35:29.073709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:03.415 qpair failed and we were unable to recover it.
00:21:03.415 [2024-04-24 21:35:29.083438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.415 [2024-04-24 21:35:29.083648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.415 [2024-04-24 21:35:29.083674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.415 [2024-04-24 21:35:29.083690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.415 [2024-04-24 21:35:29.083703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.415 [2024-04-24 21:35:29.083736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.415 qpair failed and we were unable to recover it. 00:21:03.677 [2024-04-24 21:35:29.093469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.677 [2024-04-24 21:35:29.093648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.677 [2024-04-24 21:35:29.093674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.677 [2024-04-24 21:35:29.093689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.677 [2024-04-24 21:35:29.093702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.677 [2024-04-24 21:35:29.093731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.677 qpair failed and we were unable to recover it. 00:21:03.677 [2024-04-24 21:35:29.103504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.677 [2024-04-24 21:35:29.103678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.677 [2024-04-24 21:35:29.103704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.677 [2024-04-24 21:35:29.103720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.677 [2024-04-24 21:35:29.103732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.677 [2024-04-24 21:35:29.103759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.677 qpair failed and we were unable to recover it. 
00:21:03.677 [2024-04-24 21:35:29.113554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.677 [2024-04-24 21:35:29.113776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.677 [2024-04-24 21:35:29.113801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.677 [2024-04-24 21:35:29.113817] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.677 [2024-04-24 21:35:29.113831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.677 [2024-04-24 21:35:29.113859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.677 qpair failed and we were unable to recover it. 00:21:03.677 [2024-04-24 21:35:29.123574] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.677 [2024-04-24 21:35:29.123750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.677 [2024-04-24 21:35:29.123776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.677 [2024-04-24 21:35:29.123791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.677 [2024-04-24 21:35:29.123804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.677 [2024-04-24 21:35:29.123832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.677 qpair failed and we were unable to recover it. 00:21:03.677 [2024-04-24 21:35:29.133598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.677 [2024-04-24 21:35:29.133769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.677 [2024-04-24 21:35:29.133801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.677 [2024-04-24 21:35:29.133817] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.677 [2024-04-24 21:35:29.133830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.677 [2024-04-24 21:35:29.133858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.677 qpair failed and we were unable to recover it. 
00:21:03.677 [2024-04-24 21:35:29.143599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.677 [2024-04-24 21:35:29.143767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.677 [2024-04-24 21:35:29.143794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.677 [2024-04-24 21:35:29.143809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.677 [2024-04-24 21:35:29.143821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.677 [2024-04-24 21:35:29.143849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.677 qpair failed and we were unable to recover it. 00:21:03.677 [2024-04-24 21:35:29.153665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.677 [2024-04-24 21:35:29.153837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.677 [2024-04-24 21:35:29.153863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.677 [2024-04-24 21:35:29.153879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.677 [2024-04-24 21:35:29.153891] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.677 [2024-04-24 21:35:29.153919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.677 qpair failed and we were unable to recover it. 00:21:03.677 [2024-04-24 21:35:29.163684] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.677 [2024-04-24 21:35:29.163843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.677 [2024-04-24 21:35:29.163869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.678 [2024-04-24 21:35:29.163884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.678 [2024-04-24 21:35:29.163897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.678 [2024-04-24 21:35:29.163936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.678 qpair failed and we were unable to recover it. 
00:21:03.678 [2024-04-24 21:35:29.173719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.678 [2024-04-24 21:35:29.173926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.678 [2024-04-24 21:35:29.173952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.678 [2024-04-24 21:35:29.173968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.678 [2024-04-24 21:35:29.173980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.678 [2024-04-24 21:35:29.174012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.678 qpair failed and we were unable to recover it. 00:21:03.678 [2024-04-24 21:35:29.183712] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.678 [2024-04-24 21:35:29.183881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.678 [2024-04-24 21:35:29.183905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.678 [2024-04-24 21:35:29.183920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.678 [2024-04-24 21:35:29.183933] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.678 [2024-04-24 21:35:29.183960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.678 qpair failed and we were unable to recover it. 00:21:03.678 [2024-04-24 21:35:29.193804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.678 [2024-04-24 21:35:29.193976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.678 [2024-04-24 21:35:29.194001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.678 [2024-04-24 21:35:29.194016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.678 [2024-04-24 21:35:29.194034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.678 [2024-04-24 21:35:29.194062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.678 qpair failed and we were unable to recover it. 
00:21:03.678 [2024-04-24 21:35:29.203794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.678 [2024-04-24 21:35:29.203960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.678 [2024-04-24 21:35:29.203986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.678 [2024-04-24 21:35:29.204001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.678 [2024-04-24 21:35:29.204013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.678 [2024-04-24 21:35:29.204041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.678 qpair failed and we were unable to recover it. 00:21:03.678 [2024-04-24 21:35:29.213901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.678 [2024-04-24 21:35:29.214073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.678 [2024-04-24 21:35:29.214099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.678 [2024-04-24 21:35:29.214114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.678 [2024-04-24 21:35:29.214126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.678 [2024-04-24 21:35:29.214153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.678 qpair failed and we were unable to recover it. 00:21:03.678 [2024-04-24 21:35:29.223857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.678 [2024-04-24 21:35:29.224018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.678 [2024-04-24 21:35:29.224048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.678 [2024-04-24 21:35:29.224064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.678 [2024-04-24 21:35:29.224077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.678 [2024-04-24 21:35:29.224105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.678 qpair failed and we were unable to recover it. 
00:21:03.678 [2024-04-24 21:35:29.233904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.678 [2024-04-24 21:35:29.234102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.678 [2024-04-24 21:35:29.234127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.678 [2024-04-24 21:35:29.234143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.678 [2024-04-24 21:35:29.234156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.678 [2024-04-24 21:35:29.234183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.678 qpair failed and we were unable to recover it. 00:21:03.678 [2024-04-24 21:35:29.243922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.678 [2024-04-24 21:35:29.244089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.678 [2024-04-24 21:35:29.244114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.678 [2024-04-24 21:35:29.244129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.678 [2024-04-24 21:35:29.244143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.678 [2024-04-24 21:35:29.244170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.678 qpair failed and we were unable to recover it. 00:21:03.678 [2024-04-24 21:35:29.253918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.678 [2024-04-24 21:35:29.254078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.678 [2024-04-24 21:35:29.254104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.678 [2024-04-24 21:35:29.254119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.678 [2024-04-24 21:35:29.254132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.678 [2024-04-24 21:35:29.254162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.678 qpair failed and we were unable to recover it. 
00:21:03.678 [2024-04-24 21:35:29.263947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.678 [2024-04-24 21:35:29.264107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.678 [2024-04-24 21:35:29.264134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.678 [2024-04-24 21:35:29.264149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.679 [2024-04-24 21:35:29.264166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.679 [2024-04-24 21:35:29.264195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.679 qpair failed and we were unable to recover it. 00:21:03.679 [2024-04-24 21:35:29.273986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.679 [2024-04-24 21:35:29.274159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.679 [2024-04-24 21:35:29.274185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.679 [2024-04-24 21:35:29.274200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.679 [2024-04-24 21:35:29.274213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.679 [2024-04-24 21:35:29.274241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.679 qpair failed and we were unable to recover it. 00:21:03.679 [2024-04-24 21:35:29.284023] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.679 [2024-04-24 21:35:29.284195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.679 [2024-04-24 21:35:29.284221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.679 [2024-04-24 21:35:29.284237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.679 [2024-04-24 21:35:29.284250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.679 [2024-04-24 21:35:29.284278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.679 qpair failed and we were unable to recover it. 
00:21:03.679 [2024-04-24 21:35:29.294028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.679 [2024-04-24 21:35:29.294195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.679 [2024-04-24 21:35:29.294222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.679 [2024-04-24 21:35:29.294238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.679 [2024-04-24 21:35:29.294250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.679 [2024-04-24 21:35:29.294278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.679 qpair failed and we were unable to recover it. 00:21:03.679 [2024-04-24 21:35:29.304058] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.679 [2024-04-24 21:35:29.304240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.679 [2024-04-24 21:35:29.304267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.679 [2024-04-24 21:35:29.304286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.679 [2024-04-24 21:35:29.304300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.679 [2024-04-24 21:35:29.304329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.679 qpair failed and we were unable to recover it. 00:21:03.679 [2024-04-24 21:35:29.314081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.679 [2024-04-24 21:35:29.314291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.679 [2024-04-24 21:35:29.314318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.679 [2024-04-24 21:35:29.314333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.679 [2024-04-24 21:35:29.314346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.679 [2024-04-24 21:35:29.314374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.679 qpair failed and we were unable to recover it. 
00:21:03.679 [2024-04-24 21:35:29.324126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.679 [2024-04-24 21:35:29.324292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.679 [2024-04-24 21:35:29.324319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.679 [2024-04-24 21:35:29.324334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.679 [2024-04-24 21:35:29.324346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.679 [2024-04-24 21:35:29.324390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.679 qpair failed and we were unable to recover it. 00:21:03.679 [2024-04-24 21:35:29.334162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.679 [2024-04-24 21:35:29.334333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.679 [2024-04-24 21:35:29.334358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.679 [2024-04-24 21:35:29.334374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.679 [2024-04-24 21:35:29.334386] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.679 [2024-04-24 21:35:29.334414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.679 qpair failed and we were unable to recover it. 00:21:03.679 [2024-04-24 21:35:29.344150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.679 [2024-04-24 21:35:29.344305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.679 [2024-04-24 21:35:29.344330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.679 [2024-04-24 21:35:29.344346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.679 [2024-04-24 21:35:29.344358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.679 [2024-04-24 21:35:29.344386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.679 qpair failed and we were unable to recover it. 
00:21:03.939 [2024-04-24 21:35:29.354245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.939 [2024-04-24 21:35:29.354418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.939 [2024-04-24 21:35:29.354443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.939 [2024-04-24 21:35:29.354459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.939 [2024-04-24 21:35:29.354477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.939 [2024-04-24 21:35:29.354506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.939 qpair failed and we were unable to recover it. 00:21:03.939 [2024-04-24 21:35:29.364264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.939 [2024-04-24 21:35:29.364445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.939 [2024-04-24 21:35:29.364471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.939 [2024-04-24 21:35:29.364487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.939 [2024-04-24 21:35:29.364499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.939 [2024-04-24 21:35:29.364528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.939 qpair failed and we were unable to recover it. 00:21:03.939 [2024-04-24 21:35:29.374269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.939 [2024-04-24 21:35:29.374432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.939 [2024-04-24 21:35:29.374459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.939 [2024-04-24 21:35:29.374474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.939 [2024-04-24 21:35:29.374487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.939 [2024-04-24 21:35:29.374517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.939 qpair failed and we were unable to recover it. 
00:21:03.939 [2024-04-24 21:35:29.384272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.939 [2024-04-24 21:35:29.384482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.939 [2024-04-24 21:35:29.384508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.939 [2024-04-24 21:35:29.384524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.939 [2024-04-24 21:35:29.384537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.939 [2024-04-24 21:35:29.384567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.939 qpair failed and we were unable to recover it. 00:21:03.939 [2024-04-24 21:35:29.394311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.939 [2024-04-24 21:35:29.394471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.939 [2024-04-24 21:35:29.394497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.939 [2024-04-24 21:35:29.394512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.939 [2024-04-24 21:35:29.394524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.939 [2024-04-24 21:35:29.394552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.939 qpair failed and we were unable to recover it. 00:21:03.939 [2024-04-24 21:35:29.404337] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.939 [2024-04-24 21:35:29.404510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.939 [2024-04-24 21:35:29.404537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.939 [2024-04-24 21:35:29.404552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.939 [2024-04-24 21:35:29.404564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.939 [2024-04-24 21:35:29.404607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.939 qpair failed and we were unable to recover it. 
00:21:03.939 [2024-04-24 21:35:29.414399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.939 [2024-04-24 21:35:29.414614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.939 [2024-04-24 21:35:29.414646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.939 [2024-04-24 21:35:29.414663] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.939 [2024-04-24 21:35:29.414676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.414704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 00:21:03.940 [2024-04-24 21:35:29.424390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.940 [2024-04-24 21:35:29.424560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.940 [2024-04-24 21:35:29.424586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.940 [2024-04-24 21:35:29.424602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.940 [2024-04-24 21:35:29.424614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.424651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 00:21:03.940 [2024-04-24 21:35:29.434441] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.940 [2024-04-24 21:35:29.434639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.940 [2024-04-24 21:35:29.434665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.940 [2024-04-24 21:35:29.434681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.940 [2024-04-24 21:35:29.434694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.434724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 
00:21:03.940 [2024-04-24 21:35:29.444537] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.940 [2024-04-24 21:35:29.444735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.940 [2024-04-24 21:35:29.444762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.940 [2024-04-24 21:35:29.444778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.940 [2024-04-24 21:35:29.444796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.444825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 00:21:03.940 [2024-04-24 21:35:29.454495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.940 [2024-04-24 21:35:29.454693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.940 [2024-04-24 21:35:29.454719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.940 [2024-04-24 21:35:29.454734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.940 [2024-04-24 21:35:29.454747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.454775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 00:21:03.940 [2024-04-24 21:35:29.464500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.940 [2024-04-24 21:35:29.464707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.940 [2024-04-24 21:35:29.464733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.940 [2024-04-24 21:35:29.464748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.940 [2024-04-24 21:35:29.464761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.464789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 
00:21:03.940 [2024-04-24 21:35:29.474555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.940 [2024-04-24 21:35:29.474725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.940 [2024-04-24 21:35:29.474751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.940 [2024-04-24 21:35:29.474766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.940 [2024-04-24 21:35:29.474778] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.474806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 00:21:03.940 [2024-04-24 21:35:29.484585] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.940 [2024-04-24 21:35:29.484766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.940 [2024-04-24 21:35:29.484792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.940 [2024-04-24 21:35:29.484808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.940 [2024-04-24 21:35:29.484820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.484848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 00:21:03.940 [2024-04-24 21:35:29.494609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.940 [2024-04-24 21:35:29.494784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.940 [2024-04-24 21:35:29.494810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.940 [2024-04-24 21:35:29.494825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.940 [2024-04-24 21:35:29.494838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.494867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 
00:21:03.940 [2024-04-24 21:35:29.504608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.940 [2024-04-24 21:35:29.504777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.940 [2024-04-24 21:35:29.504804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.940 [2024-04-24 21:35:29.504819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.940 [2024-04-24 21:35:29.504832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.504859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 00:21:03.940 [2024-04-24 21:35:29.514657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.940 [2024-04-24 21:35:29.514843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.940 [2024-04-24 21:35:29.514868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.940 [2024-04-24 21:35:29.514884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.940 [2024-04-24 21:35:29.514896] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.514924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 00:21:03.940 [2024-04-24 21:35:29.524759] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.940 [2024-04-24 21:35:29.524935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.940 [2024-04-24 21:35:29.524961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.940 [2024-04-24 21:35:29.524976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.940 [2024-04-24 21:35:29.524988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:03.940 [2024-04-24 21:35:29.525016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:03.940 qpair failed and we were unable to recover it. 
00:21:03.940 [2024-04-24 21:35:29.534703] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.940 [2024-04-24 21:35:29.534866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.940 [2024-04-24 21:35:29.534891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.940 [2024-04-24 21:35:29.534912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.940 [2024-04-24 21:35:29.534926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:03.940 [2024-04-24 21:35:29.534954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:03.940 qpair failed and we were unable to recover it.
[... the identical seven-message CONNECT failure sequence repeats for 68 further qpair connect attempts, wall-clock 2024-04-24 21:35:29.544 through 21:35:30.216 (elapsed 00:21:03.940 to 00:21:04.726), at roughly 10 ms intervals; every attempt is rejected with "Unknown controller ID 0x1", rc -5, sct 1, sc 130 against tqpair=0x1beef30 / qpair id 3, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:21:04.726 [2024-04-24 21:35:30.226728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.726 [2024-04-24 21:35:30.226891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.726 [2024-04-24 21:35:30.226917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.726 [2024-04-24 21:35:30.226931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.726 [2024-04-24 21:35:30.226944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.726 [2024-04-24 21:35:30.226972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.726 qpair failed and we were unable to recover it. 00:21:04.726 [2024-04-24 21:35:30.236760] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.236922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.236946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.236961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.236974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.237003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 00:21:04.727 [2024-04-24 21:35:30.246799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.246966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.246992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.247011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.247038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.247067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 
00:21:04.727 [2024-04-24 21:35:30.256781] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.256943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.256969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.256984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.256997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.257026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 00:21:04.727 [2024-04-24 21:35:30.266826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.267008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.267032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.267047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.267061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.267088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 00:21:04.727 [2024-04-24 21:35:30.276862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.277024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.277049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.277064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.277077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.277105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 
00:21:04.727 [2024-04-24 21:35:30.286883] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.287088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.287113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.287128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.287142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.287170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 00:21:04.727 [2024-04-24 21:35:30.297018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.297197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.297221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.297242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.297257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.297285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 00:21:04.727 [2024-04-24 21:35:30.306931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.307084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.307109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.307124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.307137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.307165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 
00:21:04.727 [2024-04-24 21:35:30.316960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.317125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.317150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.317165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.317178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.317205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 00:21:04.727 [2024-04-24 21:35:30.327017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.327184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.327209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.327224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.327237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.327267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 00:21:04.727 [2024-04-24 21:35:30.337041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.337212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.337237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.337252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.337264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.337292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 
00:21:04.727 [2024-04-24 21:35:30.347054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.347214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.347239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.347255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.347267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.347295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 00:21:04.727 [2024-04-24 21:35:30.357171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.357369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.357394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.357409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.727 [2024-04-24 21:35:30.357423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.727 [2024-04-24 21:35:30.357452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.727 qpair failed and we were unable to recover it. 00:21:04.727 [2024-04-24 21:35:30.367107] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.727 [2024-04-24 21:35:30.367318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.727 [2024-04-24 21:35:30.367344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.727 [2024-04-24 21:35:30.367360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.728 [2024-04-24 21:35:30.367373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.728 [2024-04-24 21:35:30.367401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.728 qpair failed and we were unable to recover it. 
00:21:04.728 [2024-04-24 21:35:30.377122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.728 [2024-04-24 21:35:30.377273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.728 [2024-04-24 21:35:30.377298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.728 [2024-04-24 21:35:30.377313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.728 [2024-04-24 21:35:30.377325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.728 [2024-04-24 21:35:30.377353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.728 qpair failed and we were unable to recover it. 00:21:04.728 [2024-04-24 21:35:30.387160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.728 [2024-04-24 21:35:30.387330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.728 [2024-04-24 21:35:30.387355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.728 [2024-04-24 21:35:30.387375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.728 [2024-04-24 21:35:30.387390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.728 [2024-04-24 21:35:30.387418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.728 qpair failed and we were unable to recover it. 00:21:04.728 [2024-04-24 21:35:30.397283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.728 [2024-04-24 21:35:30.397444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.728 [2024-04-24 21:35:30.397469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.728 [2024-04-24 21:35:30.397483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.728 [2024-04-24 21:35:30.397496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.728 [2024-04-24 21:35:30.397524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.728 qpair failed and we were unable to recover it. 
00:21:04.988 [2024-04-24 21:35:30.407333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.407531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.407574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.407590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.407603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.407652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 00:21:04.988 [2024-04-24 21:35:30.417308] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.417509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.417534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.417550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.417563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.417590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 00:21:04.988 [2024-04-24 21:35:30.427295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.427455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.427481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.427496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.427508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.427536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 
00:21:04.988 [2024-04-24 21:35:30.437368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.437560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.437585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.437600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.437614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.437650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 00:21:04.988 [2024-04-24 21:35:30.447467] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.447643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.447669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.447684] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.447697] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.447727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 00:21:04.988 [2024-04-24 21:35:30.457371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.457536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.457561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.457576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.457589] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.457618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 
00:21:04.988 [2024-04-24 21:35:30.467412] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.467577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.467602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.467616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.467635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.467675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 00:21:04.988 [2024-04-24 21:35:30.477474] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.477661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.477686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.477710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.477724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.477754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 00:21:04.988 [2024-04-24 21:35:30.487473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.487655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.487680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.487695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.487708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.487736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 
00:21:04.988 [2024-04-24 21:35:30.497500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.497664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.497689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.497704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.497716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.497744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 00:21:04.988 [2024-04-24 21:35:30.507503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.507666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.507691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.507706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.507720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.507748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 00:21:04.988 [2024-04-24 21:35:30.517588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.988 [2024-04-24 21:35:30.517763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.988 [2024-04-24 21:35:30.517789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.988 [2024-04-24 21:35:30.517804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.988 [2024-04-24 21:35:30.517816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.988 [2024-04-24 21:35:30.517845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.988 qpair failed and we were unable to recover it. 
00:21:04.988 [2024-04-24 21:35:30.527586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.527791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.527819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.527834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.527848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.527877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 00:21:04.989 [2024-04-24 21:35:30.537634] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.537821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.537846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.537861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.537875] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.537903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 00:21:04.989 [2024-04-24 21:35:30.547657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.547815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.547840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.547855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.547867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.547894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 
00:21:04.989 [2024-04-24 21:35:30.557705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.557901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.557926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.557941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.557955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.557982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 00:21:04.989 [2024-04-24 21:35:30.567692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.567854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.567885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.567901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.567914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.567943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 00:21:04.989 [2024-04-24 21:35:30.577728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.577896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.577921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.577937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.577950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.577977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 
00:21:04.989 [2024-04-24 21:35:30.587743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.587901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.587926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.587941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.587954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.587982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 00:21:04.989 [2024-04-24 21:35:30.597910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.598088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.598113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.598128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.598141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.598169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 00:21:04.989 [2024-04-24 21:35:30.607821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.607984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.608010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.608025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.608038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.608087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 
00:21:04.989 [2024-04-24 21:35:30.617841] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.617997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.618022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.618037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.618049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.618076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 00:21:04.989 [2024-04-24 21:35:30.627864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.628029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.628054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.628068] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.628081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.628109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 00:21:04.989 [2024-04-24 21:35:30.637928] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.638099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.638125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.638145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.638158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.638187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 
00:21:04.989 [2024-04-24 21:35:30.647952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.648143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.648168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.648197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.648210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.648238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 00:21:04.989 [2024-04-24 21:35:30.657986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.989 [2024-04-24 21:35:30.658148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.989 [2024-04-24 21:35:30.658179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.989 [2024-04-24 21:35:30.658194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.989 [2024-04-24 21:35:30.658207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:04.989 [2024-04-24 21:35:30.658236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:04.989 qpair failed and we were unable to recover it. 00:21:05.249 [2024-04-24 21:35:30.668069] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.249 [2024-04-24 21:35:30.668236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.249 [2024-04-24 21:35:30.668262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.249 [2024-04-24 21:35:30.668278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.249 [2024-04-24 21:35:30.668290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.249 [2024-04-24 21:35:30.668318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.249 qpair failed and we were unable to recover it. 
00:21:05.249 [2024-04-24 21:35:30.678117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.249 [2024-04-24 21:35:30.678293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.249 [2024-04-24 21:35:30.678318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.249 [2024-04-24 21:35:30.678334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.249 [2024-04-24 21:35:30.678355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.249 [2024-04-24 21:35:30.678382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.249 qpair failed and we were unable to recover it. 00:21:05.249 [2024-04-24 21:35:30.688122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.249 [2024-04-24 21:35:30.688323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.249 [2024-04-24 21:35:30.688349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.249 [2024-04-24 21:35:30.688364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.249 [2024-04-24 21:35:30.688377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.249 [2024-04-24 21:35:30.688404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.249 qpair failed and we were unable to recover it. 00:21:05.249 [2024-04-24 21:35:30.698122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.249 [2024-04-24 21:35:30.698276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.249 [2024-04-24 21:35:30.698301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.249 [2024-04-24 21:35:30.698316] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.249 [2024-04-24 21:35:30.698329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.249 [2024-04-24 21:35:30.698364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.249 qpair failed and we were unable to recover it. 
00:21:05.249 [2024-04-24 21:35:30.708149] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.249 [2024-04-24 21:35:30.708308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.249 [2024-04-24 21:35:30.708334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.249 [2024-04-24 21:35:30.708349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.249 [2024-04-24 21:35:30.708362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.249 [2024-04-24 21:35:30.708390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.249 qpair failed and we were unable to recover it. 00:21:05.249 [2024-04-24 21:35:30.718217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.249 [2024-04-24 21:35:30.718373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.249 [2024-04-24 21:35:30.718399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.249 [2024-04-24 21:35:30.718413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.249 [2024-04-24 21:35:30.718425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.249 [2024-04-24 21:35:30.718453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.249 qpair failed and we were unable to recover it. 00:21:05.249 [2024-04-24 21:35:30.728226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.250 [2024-04-24 21:35:30.728385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.250 [2024-04-24 21:35:30.728411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.250 [2024-04-24 21:35:30.728427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.250 [2024-04-24 21:35:30.728439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.250 [2024-04-24 21:35:30.728468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.250 qpair failed and we were unable to recover it. 
00:21:05.250 [2024-04-24 21:35:30.738287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.250 [2024-04-24 21:35:30.738454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.250 [2024-04-24 21:35:30.738479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.250 [2024-04-24 21:35:30.738496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.250 [2024-04-24 21:35:30.738512] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.250 [2024-04-24 21:35:30.738540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.250 qpair failed and we were unable to recover it.
00:21:05.250 [2024-04-24 21:35:30.748280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.250 [2024-04-24 21:35:30.748488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.250 [2024-04-24 21:35:30.748520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.250 [2024-04-24 21:35:30.748536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.250 [2024-04-24 21:35:30.748549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.250 [2024-04-24 21:35:30.748577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.250 qpair failed and we were unable to recover it.
00:21:05.250 [2024-04-24 21:35:30.758338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.250 [2024-04-24 21:35:30.758507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.250 [2024-04-24 21:35:30.758533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.250 [2024-04-24 21:35:30.758548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.250 [2024-04-24 21:35:30.758560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.250 [2024-04-24 21:35:30.758589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.250 qpair failed and we were unable to recover it.
00:21:05.250 [2024-04-24 21:35:30.768331] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.250 [2024-04-24 21:35:30.768536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.250 [2024-04-24 21:35:30.768562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.250 [2024-04-24 21:35:30.768577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.250 [2024-04-24 21:35:30.768590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.250 [2024-04-24 21:35:30.768618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.250 qpair failed and we were unable to recover it.
00:21:05.250 [2024-04-24 21:35:30.778347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.250 [2024-04-24 21:35:30.778503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.250 [2024-04-24 21:35:30.778529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.250 [2024-04-24 21:35:30.778544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.250 [2024-04-24 21:35:30.778557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.250 [2024-04-24 21:35:30.778584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.250 qpair failed and we were unable to recover it.
00:21:05.250 [2024-04-24 21:35:30.788384] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.250 [2024-04-24 21:35:30.788540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.250 [2024-04-24 21:35:30.788566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.250 [2024-04-24 21:35:30.788581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.250 [2024-04-24 21:35:30.788593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.250 [2024-04-24 21:35:30.788626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.250 qpair failed and we were unable to recover it.
00:21:05.250 [2024-04-24 21:35:30.798427] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.250 [2024-04-24 21:35:30.798588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.250 [2024-04-24 21:35:30.798614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.250 [2024-04-24 21:35:30.798636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.250 [2024-04-24 21:35:30.798651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.250 [2024-04-24 21:35:30.798678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.250 qpair failed and we were unable to recover it.
00:21:05.250 [2024-04-24 21:35:30.808494] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.250 [2024-04-24 21:35:30.808707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.250 [2024-04-24 21:35:30.808734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.250 [2024-04-24 21:35:30.808749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.250 [2024-04-24 21:35:30.808762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.250 [2024-04-24 21:35:30.808790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.250 qpair failed and we were unable to recover it.
00:21:05.250 [2024-04-24 21:35:30.818475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.250 [2024-04-24 21:35:30.818642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.250 [2024-04-24 21:35:30.818668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.250 [2024-04-24 21:35:30.818683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.250 [2024-04-24 21:35:30.818696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.250 [2024-04-24 21:35:30.818724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.250 qpair failed and we were unable to recover it.
00:21:05.250 [2024-04-24 21:35:30.828488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.250 [2024-04-24 21:35:30.828656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.250 [2024-04-24 21:35:30.828682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.250 [2024-04-24 21:35:30.828697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.250 [2024-04-24 21:35:30.828710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.250 [2024-04-24 21:35:30.828738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.250 qpair failed and we were unable to recover it.
00:21:05.250 [2024-04-24 21:35:30.838543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.250 [2024-04-24 21:35:30.838711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.250 [2024-04-24 21:35:30.838742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.251 [2024-04-24 21:35:30.838758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.251 [2024-04-24 21:35:30.838772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.251 [2024-04-24 21:35:30.838800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.251 qpair failed and we were unable to recover it.
00:21:05.251 [2024-04-24 21:35:30.848588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.251 [2024-04-24 21:35:30.848767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.251 [2024-04-24 21:35:30.848794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.251 [2024-04-24 21:35:30.848809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.251 [2024-04-24 21:35:30.848822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.251 [2024-04-24 21:35:30.848850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.251 qpair failed and we were unable to recover it.
00:21:05.251 [2024-04-24 21:35:30.858604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.251 [2024-04-24 21:35:30.858769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.251 [2024-04-24 21:35:30.858796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.251 [2024-04-24 21:35:30.858812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.251 [2024-04-24 21:35:30.858824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.251 [2024-04-24 21:35:30.858855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.251 qpair failed and we were unable to recover it.
00:21:05.251 [2024-04-24 21:35:30.868668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.251 [2024-04-24 21:35:30.868825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.251 [2024-04-24 21:35:30.868852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.251 [2024-04-24 21:35:30.868867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.251 [2024-04-24 21:35:30.868880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.251 [2024-04-24 21:35:30.868907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.251 qpair failed and we were unable to recover it.
00:21:05.251 [2024-04-24 21:35:30.878676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.251 [2024-04-24 21:35:30.878838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.251 [2024-04-24 21:35:30.878865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.251 [2024-04-24 21:35:30.878880] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.251 [2024-04-24 21:35:30.878899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.251 [2024-04-24 21:35:30.878928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.251 qpair failed and we were unable to recover it.
00:21:05.251 [2024-04-24 21:35:30.888692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.251 [2024-04-24 21:35:30.888890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.251 [2024-04-24 21:35:30.888918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.251 [2024-04-24 21:35:30.888938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.251 [2024-04-24 21:35:30.888950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.251 [2024-04-24 21:35:30.888994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.251 qpair failed and we were unable to recover it.
00:21:05.251 [2024-04-24 21:35:30.898713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.251 [2024-04-24 21:35:30.898869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.251 [2024-04-24 21:35:30.898896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.251 [2024-04-24 21:35:30.898911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.251 [2024-04-24 21:35:30.898924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.251 [2024-04-24 21:35:30.898952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.251 qpair failed and we were unable to recover it.
00:21:05.251 [2024-04-24 21:35:30.908751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.251 [2024-04-24 21:35:30.908918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.251 [2024-04-24 21:35:30.908945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.251 [2024-04-24 21:35:30.908960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.251 [2024-04-24 21:35:30.908972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.251 [2024-04-24 21:35:30.909001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.251 qpair failed and we were unable to recover it.
00:21:05.251 [2024-04-24 21:35:30.918786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.251 [2024-04-24 21:35:30.918959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.251 [2024-04-24 21:35:30.918985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.251 [2024-04-24 21:35:30.919000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.251 [2024-04-24 21:35:30.919013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.251 [2024-04-24 21:35:30.919041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.251 qpair failed and we were unable to recover it.
00:21:05.510 [2024-04-24 21:35:30.928838] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.510 [2024-04-24 21:35:30.929016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.510 [2024-04-24 21:35:30.929042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.510 [2024-04-24 21:35:30.929057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.510 [2024-04-24 21:35:30.929085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.510 [2024-04-24 21:35:30.929114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.510 qpair failed and we were unable to recover it.
00:21:05.510 [2024-04-24 21:35:30.938817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.510 [2024-04-24 21:35:30.938979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.510 [2024-04-24 21:35:30.939005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.510 [2024-04-24 21:35:30.939020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.510 [2024-04-24 21:35:30.939033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.510 [2024-04-24 21:35:30.939061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.510 qpair failed and we were unable to recover it.
00:21:05.510 [2024-04-24 21:35:30.948870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.510 [2024-04-24 21:35:30.949028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.510 [2024-04-24 21:35:30.949055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.510 [2024-04-24 21:35:30.949070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.510 [2024-04-24 21:35:30.949083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.510 [2024-04-24 21:35:30.949113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.510 qpair failed and we were unable to recover it.
00:21:05.510 [2024-04-24 21:35:30.958927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:30.959095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:30.959121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:30.959136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.511 [2024-04-24 21:35:30.959149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.511 [2024-04-24 21:35:30.959177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.511 qpair failed and we were unable to recover it.
00:21:05.511 [2024-04-24 21:35:30.968944] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:30.969113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:30.969139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:30.969154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.511 [2024-04-24 21:35:30.969172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.511 [2024-04-24 21:35:30.969201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.511 qpair failed and we were unable to recover it.
00:21:05.511 [2024-04-24 21:35:30.978934] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:30.979091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:30.979117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:30.979133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.511 [2024-04-24 21:35:30.979146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.511 [2024-04-24 21:35:30.979174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.511 qpair failed and we were unable to recover it.
00:21:05.511 [2024-04-24 21:35:30.988962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:30.989124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:30.989151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:30.989166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.511 [2024-04-24 21:35:30.989179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.511 [2024-04-24 21:35:30.989207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.511 qpair failed and we were unable to recover it.
00:21:05.511 [2024-04-24 21:35:30.999006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:30.999170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:30.999196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:30.999211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.511 [2024-04-24 21:35:30.999224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.511 [2024-04-24 21:35:30.999251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.511 qpair failed and we were unable to recover it.
00:21:05.511 [2024-04-24 21:35:31.009052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:31.009218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:31.009245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:31.009261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.511 [2024-04-24 21:35:31.009273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.511 [2024-04-24 21:35:31.009302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.511 qpair failed and we were unable to recover it.
00:21:05.511 [2024-04-24 21:35:31.019086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:31.019269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:31.019296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:31.019311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.511 [2024-04-24 21:35:31.019325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.511 [2024-04-24 21:35:31.019353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.511 qpair failed and we were unable to recover it.
00:21:05.511 [2024-04-24 21:35:31.029069] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:31.029236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:31.029262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:31.029277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.511 [2024-04-24 21:35:31.029289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.511 [2024-04-24 21:35:31.029317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.511 qpair failed and we were unable to recover it.
00:21:05.511 [2024-04-24 21:35:31.039119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:31.039284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:31.039310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:31.039326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.511 [2024-04-24 21:35:31.039338] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.511 [2024-04-24 21:35:31.039366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.511 qpair failed and we were unable to recover it.
00:21:05.511 [2024-04-24 21:35:31.049205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:31.049366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:31.049392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:31.049408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.511 [2024-04-24 21:35:31.049421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.511 [2024-04-24 21:35:31.049463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.511 qpair failed and we were unable to recover it.
00:21:05.511 [2024-04-24 21:35:31.059165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:31.059330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:31.059356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:31.059377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.511 [2024-04-24 21:35:31.059390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.511 [2024-04-24 21:35:31.059419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.511 qpair failed and we were unable to recover it.
00:21:05.511 [2024-04-24 21:35:31.069201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.511 [2024-04-24 21:35:31.069359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.511 [2024-04-24 21:35:31.069386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.511 [2024-04-24 21:35:31.069401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.069414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.069442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.512 [2024-04-24 21:35:31.079295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.512 [2024-04-24 21:35:31.079465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.512 [2024-04-24 21:35:31.079491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.512 [2024-04-24 21:35:31.079506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.079519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.079548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.512 [2024-04-24 21:35:31.089311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.512 [2024-04-24 21:35:31.089550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.512 [2024-04-24 21:35:31.089575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.512 [2024-04-24 21:35:31.089590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.089602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.089651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.512 [2024-04-24 21:35:31.099279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.512 [2024-04-24 21:35:31.099445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.512 [2024-04-24 21:35:31.099472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.512 [2024-04-24 21:35:31.099487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.099499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.099526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.512 [2024-04-24 21:35:31.109394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.512 [2024-04-24 21:35:31.109558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.512 [2024-04-24 21:35:31.109584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.512 [2024-04-24 21:35:31.109599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.109612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.109645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.512 [2024-04-24 21:35:31.119359] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.512 [2024-04-24 21:35:31.119517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.512 [2024-04-24 21:35:31.119543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.512 [2024-04-24 21:35:31.119558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.119571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.119599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.512 [2024-04-24 21:35:31.129386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.512 [2024-04-24 21:35:31.129556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.512 [2024-04-24 21:35:31.129583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.512 [2024-04-24 21:35:31.129598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.129610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.129646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.512 [2024-04-24 21:35:31.139438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.512 [2024-04-24 21:35:31.139592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.512 [2024-04-24 21:35:31.139619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.512 [2024-04-24 21:35:31.139641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.139655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.139683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.512 [2024-04-24 21:35:31.149528] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.512 [2024-04-24 21:35:31.149706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.512 [2024-04-24 21:35:31.149732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.512 [2024-04-24 21:35:31.149753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.149766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.149795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.512 [2024-04-24 21:35:31.159504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.512 [2024-04-24 21:35:31.159724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.512 [2024-04-24 21:35:31.159750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.512 [2024-04-24 21:35:31.159765] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.159778] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.159806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.512 [2024-04-24 21:35:31.169506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.512 [2024-04-24 21:35:31.169712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.512 [2024-04-24 21:35:31.169739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.512 [2024-04-24 21:35:31.169755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.169768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.169796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.512 [2024-04-24 21:35:31.179578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.512 [2024-04-24 21:35:31.179783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.512 [2024-04-24 21:35:31.179810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.512 [2024-04-24 21:35:31.179825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.512 [2024-04-24 21:35:31.179837] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.512 [2024-04-24 21:35:31.179865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.512 qpair failed and we were unable to recover it.
00:21:05.771 [2024-04-24 21:35:31.189576] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.771 [2024-04-24 21:35:31.189745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.771 [2024-04-24 21:35:31.189772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.771 [2024-04-24 21:35:31.189787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.771 [2024-04-24 21:35:31.189800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.771 [2024-04-24 21:35:31.189828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.771 qpair failed and we were unable to recover it.
00:21:05.771 [2024-04-24 21:35:31.199622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.771 [2024-04-24 21:35:31.199803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.771 [2024-04-24 21:35:31.199829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.771 [2024-04-24 21:35:31.199845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.771 [2024-04-24 21:35:31.199857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.771 [2024-04-24 21:35:31.199885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.771 qpair failed and we were unable to recover it.
00:21:05.771 [2024-04-24 21:35:31.209633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.771 [2024-04-24 21:35:31.209798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.771 [2024-04-24 21:35:31.209824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.771 [2024-04-24 21:35:31.209839] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.771 [2024-04-24 21:35:31.209852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.771 [2024-04-24 21:35:31.209881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.771 qpair failed and we were unable to recover it.
00:21:05.771 [2024-04-24 21:35:31.219738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.771 [2024-04-24 21:35:31.219920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.772 [2024-04-24 21:35:31.219946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.772 [2024-04-24 21:35:31.219961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.772 [2024-04-24 21:35:31.219974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.772 [2024-04-24 21:35:31.220003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.772 qpair failed and we were unable to recover it.
00:21:05.772 [2024-04-24 21:35:31.229653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.772 [2024-04-24 21:35:31.229812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.772 [2024-04-24 21:35:31.229839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.772 [2024-04-24 21:35:31.229855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.772 [2024-04-24 21:35:31.229867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.772 [2024-04-24 21:35:31.229896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.772 qpair failed and we were unable to recover it.
00:21:05.772 [2024-04-24 21:35:31.239712] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.772 [2024-04-24 21:35:31.239881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.772 [2024-04-24 21:35:31.239918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.772 [2024-04-24 21:35:31.239939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.772 [2024-04-24 21:35:31.239953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.772 [2024-04-24 21:35:31.239986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.772 qpair failed and we were unable to recover it.
00:21:05.772 [2024-04-24 21:35:31.249775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.772 [2024-04-24 21:35:31.249989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.772 [2024-04-24 21:35:31.250016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.772 [2024-04-24 21:35:31.250036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.772 [2024-04-24 21:35:31.250050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.772 [2024-04-24 21:35:31.250079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.772 qpair failed and we were unable to recover it.
00:21:05.772 [2024-04-24 21:35:31.259760] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.772 [2024-04-24 21:35:31.259965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.772 [2024-04-24 21:35:31.259991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.772 [2024-04-24 21:35:31.260006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.772 [2024-04-24 21:35:31.260019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.772 [2024-04-24 21:35:31.260047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.772 qpair failed and we were unable to recover it.
00:21:05.772 [2024-04-24 21:35:31.269809] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.772 [2024-04-24 21:35:31.270035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.772 [2024-04-24 21:35:31.270062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.772 [2024-04-24 21:35:31.270076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.772 [2024-04-24 21:35:31.270089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.772 [2024-04-24 21:35:31.270117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.772 qpair failed and we were unable to recover it.
00:21:05.772 [2024-04-24 21:35:31.279871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.772 [2024-04-24 21:35:31.280090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.772 [2024-04-24 21:35:31.280116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.772 [2024-04-24 21:35:31.280131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.772 [2024-04-24 21:35:31.280144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.772 [2024-04-24 21:35:31.280172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.772 qpair failed and we were unable to recover it.
00:21:05.772 [2024-04-24 21:35:31.289846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.772 [2024-04-24 21:35:31.290020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.772 [2024-04-24 21:35:31.290046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.772 [2024-04-24 21:35:31.290061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.772 [2024-04-24 21:35:31.290074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.772 [2024-04-24 21:35:31.290103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.772 qpair failed and we were unable to recover it.
00:21:05.772 [2024-04-24 21:35:31.299885] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.772 [2024-04-24 21:35:31.300048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.772 [2024-04-24 21:35:31.300074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.772 [2024-04-24 21:35:31.300090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.772 [2024-04-24 21:35:31.300102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:05.772 [2024-04-24 21:35:31.300130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.772 qpair failed and we were unable to recover it.
00:21:05.772 [2024-04-24 21:35:31.309918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.772 [2024-04-24 21:35:31.310081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.772 [2024-04-24 21:35:31.310107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.772 [2024-04-24 21:35:31.310123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.772 [2024-04-24 21:35:31.310135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.772 [2024-04-24 21:35:31.310163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.772 qpair failed and we were unable to recover it. 00:21:05.772 [2024-04-24 21:35:31.319938] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.772 [2024-04-24 21:35:31.320098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.772 [2024-04-24 21:35:31.320123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.772 [2024-04-24 21:35:31.320138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.772 [2024-04-24 21:35:31.320151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.772 [2024-04-24 21:35:31.320178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.772 qpair failed and we were unable to recover it. 00:21:05.772 [2024-04-24 21:35:31.329984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.772 [2024-04-24 21:35:31.330156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.772 [2024-04-24 21:35:31.330187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.772 [2024-04-24 21:35:31.330204] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.772 [2024-04-24 21:35:31.330216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.772 [2024-04-24 21:35:31.330244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.772 qpair failed and we were unable to recover it. 
00:21:05.772 [2024-04-24 21:35:31.340036] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.772 [2024-04-24 21:35:31.340198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.772 [2024-04-24 21:35:31.340224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.772 [2024-04-24 21:35:31.340240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.772 [2024-04-24 21:35:31.340252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.772 [2024-04-24 21:35:31.340280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.772 qpair failed and we were unable to recover it. 00:21:05.772 [2024-04-24 21:35:31.350017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.772 [2024-04-24 21:35:31.350189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.772 [2024-04-24 21:35:31.350215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.772 [2024-04-24 21:35:31.350230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.772 [2024-04-24 21:35:31.350242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.773 [2024-04-24 21:35:31.350270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.773 qpair failed and we were unable to recover it. 00:21:05.773 [2024-04-24 21:35:31.360146] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.773 [2024-04-24 21:35:31.360309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.773 [2024-04-24 21:35:31.360334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.773 [2024-04-24 21:35:31.360349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.773 [2024-04-24 21:35:31.360362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.773 [2024-04-24 21:35:31.360390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.773 qpair failed and we were unable to recover it. 
00:21:05.773 [2024-04-24 21:35:31.370096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.773 [2024-04-24 21:35:31.370301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.773 [2024-04-24 21:35:31.370327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.773 [2024-04-24 21:35:31.370343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.773 [2024-04-24 21:35:31.370355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.773 [2024-04-24 21:35:31.370384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.773 qpair failed and we were unable to recover it. 00:21:05.773 [2024-04-24 21:35:31.380191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.773 [2024-04-24 21:35:31.380357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.773 [2024-04-24 21:35:31.380384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.773 [2024-04-24 21:35:31.380399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.773 [2024-04-24 21:35:31.380412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.773 [2024-04-24 21:35:31.380440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.773 qpair failed and we were unable to recover it. 00:21:05.773 [2024-04-24 21:35:31.390170] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.773 [2024-04-24 21:35:31.390338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.773 [2024-04-24 21:35:31.390365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.773 [2024-04-24 21:35:31.390380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.773 [2024-04-24 21:35:31.390393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.773 [2024-04-24 21:35:31.390420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.773 qpair failed and we were unable to recover it. 
00:21:05.773 [2024-04-24 21:35:31.400199] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.773 [2024-04-24 21:35:31.400369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.773 [2024-04-24 21:35:31.400395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.773 [2024-04-24 21:35:31.400411] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.773 [2024-04-24 21:35:31.400423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.773 [2024-04-24 21:35:31.400451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.773 qpair failed and we were unable to recover it. 00:21:05.773 [2024-04-24 21:35:31.410186] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.773 [2024-04-24 21:35:31.410362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.773 [2024-04-24 21:35:31.410388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.773 [2024-04-24 21:35:31.410403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.773 [2024-04-24 21:35:31.410415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.773 [2024-04-24 21:35:31.410443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.773 qpair failed and we were unable to recover it. 00:21:05.773 [2024-04-24 21:35:31.420251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.773 [2024-04-24 21:35:31.420450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.773 [2024-04-24 21:35:31.420482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.773 [2024-04-24 21:35:31.420498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.773 [2024-04-24 21:35:31.420510] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.773 [2024-04-24 21:35:31.420538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.773 qpair failed and we were unable to recover it. 
00:21:05.773 [2024-04-24 21:35:31.430230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.773 [2024-04-24 21:35:31.430394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.773 [2024-04-24 21:35:31.430420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.773 [2024-04-24 21:35:31.430435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.773 [2024-04-24 21:35:31.430447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.773 [2024-04-24 21:35:31.430475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.773 qpair failed and we were unable to recover it. 00:21:05.773 [2024-04-24 21:35:31.440280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.773 [2024-04-24 21:35:31.440458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.773 [2024-04-24 21:35:31.440484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.773 [2024-04-24 21:35:31.440500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.773 [2024-04-24 21:35:31.440513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:05.773 [2024-04-24 21:35:31.440540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.773 qpair failed and we were unable to recover it. 00:21:06.033 [2024-04-24 21:35:31.450326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.033 [2024-04-24 21:35:31.450497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.033 [2024-04-24 21:35:31.450523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.033 [2024-04-24 21:35:31.450539] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.033 [2024-04-24 21:35:31.450560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.033 [2024-04-24 21:35:31.450589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.033 qpair failed and we were unable to recover it. 
00:21:06.033 [2024-04-24 21:35:31.460369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.033 [2024-04-24 21:35:31.460551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.033 [2024-04-24 21:35:31.460576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.033 [2024-04-24 21:35:31.460591] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.033 [2024-04-24 21:35:31.460603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.033 [2024-04-24 21:35:31.460655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.033 qpair failed and we were unable to recover it. 00:21:06.033 [2024-04-24 21:35:31.470411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.033 [2024-04-24 21:35:31.470579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.033 [2024-04-24 21:35:31.470605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.033 [2024-04-24 21:35:31.470620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.033 [2024-04-24 21:35:31.470641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.033 [2024-04-24 21:35:31.470670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.033 qpair failed and we were unable to recover it. 00:21:06.034 [2024-04-24 21:35:31.480396] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.480570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.480595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.480619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.480641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.480670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.034 qpair failed and we were unable to recover it. 
00:21:06.034 [2024-04-24 21:35:31.490461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.490649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.490674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.490689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.490701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.490731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.034 qpair failed and we were unable to recover it. 00:21:06.034 [2024-04-24 21:35:31.500473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.500633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.500659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.500678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.500690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.500718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.034 qpair failed and we were unable to recover it. 00:21:06.034 [2024-04-24 21:35:31.510553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.510722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.510753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.510769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.510781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.510809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.034 qpair failed and we were unable to recover it. 
00:21:06.034 [2024-04-24 21:35:31.520523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.520737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.520762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.520777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.520789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.520818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.034 qpair failed and we were unable to recover it. 00:21:06.034 [2024-04-24 21:35:31.530549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.530761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.530785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.530800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.530814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.530842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.034 qpair failed and we were unable to recover it. 00:21:06.034 [2024-04-24 21:35:31.540600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.540787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.540812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.540827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.540841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.540869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.034 qpair failed and we were unable to recover it. 
00:21:06.034 [2024-04-24 21:35:31.550602] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.550835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.550862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.550877] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.550891] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.550925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.034 qpair failed and we were unable to recover it. 00:21:06.034 [2024-04-24 21:35:31.560624] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.560837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.560864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.560880] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.560893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.560922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.034 qpair failed and we were unable to recover it. 00:21:06.034 [2024-04-24 21:35:31.570663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.570864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.570889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.570904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.570918] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.570949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.034 qpair failed and we were unable to recover it. 
00:21:06.034 [2024-04-24 21:35:31.580682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.580838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.580863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.580878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.580891] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.580919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.034 qpair failed and we were unable to recover it. 00:21:06.034 [2024-04-24 21:35:31.590746] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.034 [2024-04-24 21:35:31.590950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.034 [2024-04-24 21:35:31.590976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.034 [2024-04-24 21:35:31.590995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.034 [2024-04-24 21:35:31.591009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.034 [2024-04-24 21:35:31.591038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 00:21:06.035 [2024-04-24 21:35:31.600763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.035 [2024-04-24 21:35:31.600930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.035 [2024-04-24 21:35:31.600961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.035 [2024-04-24 21:35:31.600976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.035 [2024-04-24 21:35:31.600989] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.035 [2024-04-24 21:35:31.601019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 
00:21:06.035 [2024-04-24 21:35:31.610793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.035 [2024-04-24 21:35:31.611017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.035 [2024-04-24 21:35:31.611041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.035 [2024-04-24 21:35:31.611056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.035 [2024-04-24 21:35:31.611069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.035 [2024-04-24 21:35:31.611096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 00:21:06.035 [2024-04-24 21:35:31.620823] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.035 [2024-04-24 21:35:31.620982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.035 [2024-04-24 21:35:31.621007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.035 [2024-04-24 21:35:31.621022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.035 [2024-04-24 21:35:31.621035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.035 [2024-04-24 21:35:31.621062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 00:21:06.035 [2024-04-24 21:35:31.630836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.035 [2024-04-24 21:35:31.630995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.035 [2024-04-24 21:35:31.631020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.035 [2024-04-24 21:35:31.631035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.035 [2024-04-24 21:35:31.631048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.035 [2024-04-24 21:35:31.631075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 
00:21:06.035 [2024-04-24 21:35:31.640950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.035 [2024-04-24 21:35:31.641115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.035 [2024-04-24 21:35:31.641141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.035 [2024-04-24 21:35:31.641156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.035 [2024-04-24 21:35:31.641178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.035 [2024-04-24 21:35:31.641209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 00:21:06.035 [2024-04-24 21:35:31.650898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.035 [2024-04-24 21:35:31.651067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.035 [2024-04-24 21:35:31.651092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.035 [2024-04-24 21:35:31.651123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.035 [2024-04-24 21:35:31.651136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.035 [2024-04-24 21:35:31.651164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 00:21:06.035 [2024-04-24 21:35:31.660911] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.035 [2024-04-24 21:35:31.661073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.035 [2024-04-24 21:35:31.661098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.035 [2024-04-24 21:35:31.661113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.035 [2024-04-24 21:35:31.661126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.035 [2024-04-24 21:35:31.661153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 
00:21:06.035 [2024-04-24 21:35:31.670953] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.035 [2024-04-24 21:35:31.671116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.035 [2024-04-24 21:35:31.671141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.035 [2024-04-24 21:35:31.671156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.035 [2024-04-24 21:35:31.671168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.035 [2024-04-24 21:35:31.671197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 00:21:06.035 [2024-04-24 21:35:31.680980] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.035 [2024-04-24 21:35:31.681175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.035 [2024-04-24 21:35:31.681200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.035 [2024-04-24 21:35:31.681215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.035 [2024-04-24 21:35:31.681228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.035 [2024-04-24 21:35:31.681256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 00:21:06.035 [2024-04-24 21:35:31.691079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.035 [2024-04-24 21:35:31.691247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.035 [2024-04-24 21:35:31.691272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.035 [2024-04-24 21:35:31.691287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.035 [2024-04-24 21:35:31.691300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.035 [2024-04-24 21:35:31.691328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 
00:21:06.035 [2024-04-24 21:35:31.701039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.035 [2024-04-24 21:35:31.701212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.035 [2024-04-24 21:35:31.701237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.035 [2024-04-24 21:35:31.701252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.035 [2024-04-24 21:35:31.701265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.035 [2024-04-24 21:35:31.701294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.035 qpair failed and we were unable to recover it. 00:21:06.295 [2024-04-24 21:35:31.711121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.295 [2024-04-24 21:35:31.711299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.295 [2024-04-24 21:35:31.711324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.295 [2024-04-24 21:35:31.711338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.295 [2024-04-24 21:35:31.711352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.295 [2024-04-24 21:35:31.711380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.295 qpair failed and we were unable to recover it. 00:21:06.295 [2024-04-24 21:35:31.721088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.295 [2024-04-24 21:35:31.721265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.295 [2024-04-24 21:35:31.721290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.721305] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.721317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.721346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 
00:21:06.296 [2024-04-24 21:35:31.731092] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.731249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.731274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.731288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.731307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.731336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 00:21:06.296 [2024-04-24 21:35:31.741165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.741368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.741394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.741410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.741423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.741451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 00:21:06.296 [2024-04-24 21:35:31.751210] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.751373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.751398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.751412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.751425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.751454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 
00:21:06.296 [2024-04-24 21:35:31.761187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.761349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.761374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.761389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.761402] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.761430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 00:21:06.296 [2024-04-24 21:35:31.771229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.771422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.771447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.771462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.771475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.771504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 00:21:06.296 [2024-04-24 21:35:31.781278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.781476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.781501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.781517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.781530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.781558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 
00:21:06.296 [2024-04-24 21:35:31.791266] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.791426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.791451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.791466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.791479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.791506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 00:21:06.296 [2024-04-24 21:35:31.801339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.801518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.801542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.801557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.801569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.801598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 00:21:06.296 [2024-04-24 21:35:31.811342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.811505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.811529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.811544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.811557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.811585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 
00:21:06.296 [2024-04-24 21:35:31.821363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.821522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.821548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.821562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.821581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.821610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 00:21:06.296 [2024-04-24 21:35:31.831418] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.831595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.831620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.831643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.831657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.831685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 00:21:06.296 [2024-04-24 21:35:31.841416] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.841578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.841603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.841617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.841637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.296 [2024-04-24 21:35:31.841666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.296 qpair failed and we were unable to recover it. 
00:21:06.296 [2024-04-24 21:35:31.851473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.296 [2024-04-24 21:35:31.851652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.296 [2024-04-24 21:35:31.851678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.296 [2024-04-24 21:35:31.851692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.296 [2024-04-24 21:35:31.851705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.297 [2024-04-24 21:35:31.851733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.297 qpair failed and we were unable to recover it. 00:21:06.297 [2024-04-24 21:35:31.861496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.297 [2024-04-24 21:35:31.861675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.297 [2024-04-24 21:35:31.861701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.297 [2024-04-24 21:35:31.861716] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.297 [2024-04-24 21:35:31.861728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.297 [2024-04-24 21:35:31.861756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.297 qpair failed and we were unable to recover it. 00:21:06.297 [2024-04-24 21:35:31.871493] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.297 [2024-04-24 21:35:31.871666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.297 [2024-04-24 21:35:31.871692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.297 [2024-04-24 21:35:31.871706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.297 [2024-04-24 21:35:31.871719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30 00:21:06.297 [2024-04-24 21:35:31.871747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.297 qpair failed and we were unable to recover it. 
00:21:06.297 [2024-04-24 21:35:31.881558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.297 [2024-04-24 21:35:31.881727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.297 [2024-04-24 21:35:31.881752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.297 [2024-04-24 21:35:31.881767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.297 [2024-04-24 21:35:31.881780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:06.297 [2024-04-24 21:35:31.881808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.297 qpair failed and we were unable to recover it.
00:21:06.297 [2024-04-24 21:35:31.891554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.297 [2024-04-24 21:35:31.891713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.297 [2024-04-24 21:35:31.891738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.297 [2024-04-24 21:35:31.891753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.297 [2024-04-24 21:35:31.891766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:06.297 [2024-04-24 21:35:31.891796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.297 qpair failed and we were unable to recover it.
00:21:06.297 [2024-04-24 21:35:31.901568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.297 [2024-04-24 21:35:31.901730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.297 [2024-04-24 21:35:31.901755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.297 [2024-04-24 21:35:31.901770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.297 [2024-04-24 21:35:31.901783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:06.297 [2024-04-24 21:35:31.901810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.297 qpair failed and we were unable to recover it.
00:21:06.297 [2024-04-24 21:35:31.911614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.297 [2024-04-24 21:35:31.911785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.297 [2024-04-24 21:35:31.911811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.297 [2024-04-24 21:35:31.911831] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.297 [2024-04-24 21:35:31.911844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:06.297 [2024-04-24 21:35:31.911873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.297 qpair failed and we were unable to recover it.
00:21:06.297 [2024-04-24 21:35:31.921679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.297 [2024-04-24 21:35:31.921861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.297 [2024-04-24 21:35:31.921886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.297 [2024-04-24 21:35:31.921901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.297 [2024-04-24 21:35:31.921913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:06.297 [2024-04-24 21:35:31.921942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.297 qpair failed and we were unable to recover it.
00:21:06.297 [2024-04-24 21:35:31.931667] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.297 [2024-04-24 21:35:31.931829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.297 [2024-04-24 21:35:31.931855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.297 [2024-04-24 21:35:31.931870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.297 [2024-04-24 21:35:31.931882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:06.297 [2024-04-24 21:35:31.931911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.297 qpair failed and we were unable to recover it.
00:21:06.297 [2024-04-24 21:35:31.941704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.297 [2024-04-24 21:35:31.941870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.297 [2024-04-24 21:35:31.941895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.297 [2024-04-24 21:35:31.941909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.297 [2024-04-24 21:35:31.941922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:06.297 [2024-04-24 21:35:31.941949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.297 qpair failed and we were unable to recover it.
00:21:06.297 [2024-04-24 21:35:31.951754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.297 [2024-04-24 21:35:31.951966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.297 [2024-04-24 21:35:31.951993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.297 [2024-04-24 21:35:31.952008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.297 [2024-04-24 21:35:31.952021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:06.297 [2024-04-24 21:35:31.952049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.297 qpair failed and we were unable to recover it.
00:21:06.297 [2024-04-24 21:35:31.961785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.297 [2024-04-24 21:35:31.961955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.297 [2024-04-24 21:35:31.961980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.297 [2024-04-24 21:35:31.961995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.297 [2024-04-24 21:35:31.962008] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1beef30
00:21:06.297 [2024-04-24 21:35:31.962036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.297 qpair failed and we were unable to recover it.
00:21:06.557 [2024-04-24 21:35:31.971807] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.557 [2024-04-24 21:35:31.971986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.557 [2024-04-24 21:35:31.972022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.557 [2024-04-24 21:35:31.972039] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.557 [2024-04-24 21:35:31.972053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68dc000b90
00:21:06.557 [2024-04-24 21:35:31.972084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:06.557 qpair failed and we were unable to recover it.
00:21:06.557 [2024-04-24 21:35:31.981882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.557 [2024-04-24 21:35:31.982050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.557 [2024-04-24 21:35:31.982077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.557 [2024-04-24 21:35:31.982096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.557 [2024-04-24 21:35:31.982109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68dc000b90
00:21:06.557 [2024-04-24 21:35:31.982139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:06.557 qpair failed and we were unable to recover it.
00:21:06.557 [2024-04-24 21:35:31.991897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.557 [2024-04-24 21:35:31.992086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.557 [2024-04-24 21:35:31.992117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.557 [2024-04-24 21:35:31.992133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.557 [2024-04-24 21:35:31.992147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68e4000b90
00:21:06.557 [2024-04-24 21:35:31.992178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:06.557 qpair failed and we were unable to recover it.
00:21:06.557 [2024-04-24 21:35:32.001960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.557 [2024-04-24 21:35:32.002123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.557 [2024-04-24 21:35:32.002149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.557 [2024-04-24 21:35:32.002169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.557 [2024-04-24 21:35:32.002183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68e4000b90
00:21:06.557 [2024-04-24 21:35:32.002213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:06.557 qpair failed and we were unable to recover it.
00:21:06.557 [2024-04-24 21:35:32.002344] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:21:06.557 A controller has encountered a failure and is being reset.
00:21:06.557 [2024-04-24 21:35:32.011921] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.557 [2024-04-24 21:35:32.012088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.557 [2024-04-24 21:35:32.012119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.557 [2024-04-24 21:35:32.012134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.558 [2024-04-24 21:35:32.012147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68d4000b90
00:21:06.558 [2024-04-24 21:35:32.012179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:06.558 qpair failed and we were unable to recover it.
00:21:06.558 [2024-04-24 21:35:32.021945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:06.558 [2024-04-24 21:35:32.022111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:06.558 [2024-04-24 21:35:32.022138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:06.558 [2024-04-24 21:35:32.022153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:06.558 [2024-04-24 21:35:32.022165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68d4000b90
00:21:06.558 [2024-04-24 21:35:32.022195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:06.558 qpair failed and we were unable to recover it.
00:21:06.558 [2024-04-24 21:35:32.022304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfc860 (9): Bad file descriptor
00:21:06.558 Controller properly reset.
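The retry storm above is the expected shape of this disconnect test: the host keeps re-issuing the NVMe-oF fabrics CONNECT against a controller the target no longer recognizes ("Unknown controller ID 0x1"), so each attempt completes with sct 1, sc 130 (0x82, a command-specific CONNECT rejection) and surfaces as a CQ transport error -6 on the affected qpair, until the failed keep-alive finally triggers the controller reset logged above. A minimal sketch for summarizing such a storm follows; it assumes this console output was saved to console.log (the file name is an assumption, since the build streams to stdout) and uses only standard grep/awk/sort/uniq:

# Minimal sketch, assuming the console output was captured to console.log.
# Tallies the CONNECT retry failures per qpair id.
grep -o 'CQ transport error -6 (No such device or address) on qpair id [0-9]*' console.log \
  | awk '{print "qpair id", $NF}' \
  | sort | uniq -c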
00:21:06.558 Initializing NVMe Controllers
00:21:06.558 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:06.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:06.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:21:06.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:21:06.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:21:06.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:21:06.558 Initialization complete. Launching workers.
00:21:06.558 Starting thread on core 1
00:21:06.558 Starting thread on core 2
00:21:06.558 Starting thread on core 3
00:21:06.558 Starting thread on core 0
00:21:06.558 21:35:32 -- host/target_disconnect.sh@59 -- # sync
00:21:06.558
00:21:06.558 real 0m10.889s
00:21:06.558 user 0m17.715s
00:21:06.558 sys 0m5.512s
00:21:06.558 21:35:32 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:21:06.558 21:35:32 -- common/autotest_common.sh@10 -- # set +x
00:21:06.558 ************************************
00:21:06.558 END TEST nvmf_target_disconnect_tc2
00:21:06.558 ************************************
00:21:06.558 21:35:32 -- host/target_disconnect.sh@80 -- # '[' -n '' ']'
00:21:06.558 21:35:32 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:21:06.558 21:35:32 -- host/target_disconnect.sh@85 -- # nvmftestfini
00:21:06.558 21:35:32 -- nvmf/common.sh@477 -- # nvmfcleanup
00:21:06.558 21:35:32 -- nvmf/common.sh@117 -- # sync
00:21:06.558 21:35:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:06.558 21:35:32 -- nvmf/common.sh@120 -- # set +e
00:21:06.558 21:35:32 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:06.558 21:35:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:06.558 rmmod nvme_tcp
00:21:06.558 rmmod nvme_fabrics
00:21:06.558 rmmod nvme_keyring
00:21:06.558 21:35:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:06.558 21:35:32 -- nvmf/common.sh@124 -- # set -e
00:21:06.558 21:35:32 -- nvmf/common.sh@125 -- # return 0
00:21:06.558 21:35:32 -- nvmf/common.sh@478 -- # '[' -n 2681809 ']'
00:21:06.558 21:35:32 -- nvmf/common.sh@479 -- # killprocess 2681809
00:21:06.558 21:35:32 -- common/autotest_common.sh@936 -- # '[' -z 2681809 ']'
00:21:06.558 21:35:32 -- common/autotest_common.sh@940 -- # kill -0 2681809
00:21:06.558 21:35:32 -- common/autotest_common.sh@941 -- # uname
00:21:06.558 21:35:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:06.558 21:35:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2681809
00:21:06.558 21:35:32 -- common/autotest_common.sh@942 -- # process_name=reactor_4
00:21:06.558 21:35:32 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']'
00:21:06.558 21:35:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2681809'
00:21:06.558 killing process with pid 2681809
00:21:06.558 21:35:32 -- common/autotest_common.sh@955 -- # kill 2681809
00:21:06.558 21:35:32 -- common/autotest_common.sh@960 -- # wait 2681809
00:21:06.816 21:35:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:21:06.816 21:35:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:21:07.074 21:35:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:21:07.074 21:35:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:07.074 21:35:32 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:07.074 21:35:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:07.074 21:35:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:07.074 21:35:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:08.978 21:35:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:08.978
00:21:08.978 real 0m15.761s
00:21:08.978 user 0m44.127s
00:21:08.978 sys 0m7.520s
00:21:08.978 21:35:34 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:21:08.978 21:35:34 -- common/autotest_common.sh@10 -- # set +x
00:21:08.978 ************************************
00:21:08.978 END TEST nvmf_target_disconnect
00:21:08.978 ************************************
00:21:08.978 21:35:34 -- nvmf/nvmf.sh@123 -- # timing_exit host
00:21:08.978 21:35:34 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:08.978 21:35:34 -- common/autotest_common.sh@10 -- # set +x
00:21:08.978 21:35:34 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT
00:21:08.978
00:21:08.978 real 15m41.455s
00:21:08.978 user 36m29.899s
00:21:08.978 sys 4m17.042s
00:21:08.978 21:35:34 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:21:08.978 21:35:34 -- common/autotest_common.sh@10 -- # set +x
00:21:08.978 ************************************
00:21:08.978 END TEST nvmf_tcp
00:21:08.978 ************************************
00:21:08.978 21:35:34 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]]
00:21:08.978 21:35:34 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:21:08.978 21:35:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:21:08.978 21:35:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:08.978 21:35:34 -- common/autotest_common.sh@10 -- # set +x
00:21:09.236 ************************************
00:21:09.236 START TEST spdkcli_nvmf_tcp
00:21:09.236 ************************************
00:21:09.236 21:35:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:21:09.236 * Looking for test storage...
00:21:09.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:21:09.236 21:35:34 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:21:09.236 21:35:34 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:21:09.236 21:35:34 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:21:09.236 21:35:34 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.236 21:35:34 -- nvmf/common.sh@7 -- # uname -s 00:21:09.237 21:35:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.237 21:35:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.237 21:35:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.237 21:35:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.237 21:35:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.237 21:35:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.237 21:35:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.237 21:35:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.237 21:35:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.237 21:35:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.237 21:35:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.237 21:35:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.237 21:35:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.237 21:35:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.237 21:35:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.237 21:35:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.237 21:35:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.237 21:35:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.237 21:35:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.237 21:35:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.237 21:35:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.237 21:35:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.237 21:35:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.237 21:35:34 -- paths/export.sh@5 -- # export PATH 00:21:09.237 21:35:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.237 21:35:34 -- nvmf/common.sh@47 -- # : 0 00:21:09.237 21:35:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:09.237 21:35:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:09.237 21:35:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.237 21:35:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.237 21:35:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.237 21:35:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:09.237 21:35:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:09.237 21:35:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:09.237 21:35:34 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:09.237 21:35:34 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:09.237 21:35:34 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:09.237 21:35:34 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:09.237 21:35:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:09.237 21:35:34 -- common/autotest_common.sh@10 -- # set +x 00:21:09.237 21:35:34 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:09.237 21:35:34 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2683010 00:21:09.237 21:35:34 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:09.237 21:35:34 -- spdkcli/common.sh@34 -- # waitforlisten 2683010 00:21:09.237 21:35:34 -- common/autotest_common.sh@817 -- # '[' -z 2683010 ']' 00:21:09.237 21:35:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.237 21:35:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:09.237 21:35:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.237 21:35:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:09.237 21:35:34 -- common/autotest_common.sh@10 -- # set +x 00:21:09.237 [2024-04-24 21:35:34.808735] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:21:09.237 [2024-04-24 21:35:34.808808] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683010 ] 00:21:09.237 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.237 [2024-04-24 21:35:34.882292] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:09.496 [2024-04-24 21:35:35.014416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.496 [2024-04-24 21:35:35.014425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.496 21:35:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:09.496 21:35:35 -- common/autotest_common.sh@850 -- # return 0 00:21:09.496 21:35:35 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:09.496 21:35:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:09.496 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:09.496 21:35:35 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:09.496 21:35:35 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:21:09.496 21:35:35 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:09.496 21:35:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:09.496 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:09.496 21:35:35 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:09.496 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:09.496 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:09.496 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:09.496 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:09.496 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:21:09.496 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:09.496 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:09.496 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:09.496 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' 
'\''127.0.0.1:4261'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:09.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:09.496 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:09.496 ' 00:21:10.065 [2024-04-24 21:35:35.528409] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:12.611 [2024-04-24 21:35:37.688208] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.548 [2024-04-24 21:35:38.924491] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:21:16.086 [2024-04-24 21:35:41.203633] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:21:17.994 [2024-04-24 21:35:43.149741] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:21:19.375 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:19.375 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:19.375 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:19.375 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:19.375 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:19.375 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:19.375 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:19.375 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:19.375 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:19.375 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:19.375 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:19.375 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:19.375 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:19.375 21:35:44 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:19.375 21:35:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:19.375 21:35:44 -- common/autotest_common.sh@10 -- # set +x 00:21:19.375 21:35:44 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:19.375 21:35:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:19.375 21:35:44 -- common/autotest_common.sh@10 -- # set +x 00:21:19.375 21:35:44 -- spdkcli/nvmf.sh@69 -- # check_match 00:21:19.375 21:35:44 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:21:19.633 21:35:45 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:19.633 21:35:45 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:21:19.633 21:35:45 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:19.633 21:35:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:19.633 21:35:45 -- common/autotest_common.sh@10 -- # set +x 00:21:19.633 21:35:45 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:19.633 21:35:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:19.633 21:35:45 -- common/autotest_common.sh@10 
-- # set +x 00:21:19.633 21:35:45 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:19.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:19.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:19.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:19.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:21:19.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:21:19.633 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:19.633 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:19.633 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:19.633 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:19.633 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:19.633 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:19.633 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:19.633 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:19.633 ' 00:21:24.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:24.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:24.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:24.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:24.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:21:24.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:21:24.909 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:24.909 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:24.909 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:21:24.909 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:24.909 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:21:24.909 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:24.909 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:21:24.909 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:24.909 21:35:50 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:24.909 21:35:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:24.909 21:35:50 -- common/autotest_common.sh@10 -- # set +x 00:21:24.909 21:35:50 -- spdkcli/nvmf.sh@90 -- # killprocess 2683010 00:21:24.909 21:35:50 -- common/autotest_common.sh@936 -- # '[' -z 2683010 ']' 00:21:24.909 21:35:50 -- common/autotest_common.sh@940 -- # kill -0 2683010 00:21:24.909 21:35:50 -- common/autotest_common.sh@941 -- # uname 00:21:24.909 21:35:50 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:24.909 21:35:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2683010 00:21:24.909 21:35:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:24.909 21:35:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:24.909 21:35:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2683010' 00:21:24.909 killing process with pid 2683010 00:21:24.909 21:35:50 -- common/autotest_common.sh@955 -- # kill 2683010 00:21:24.909 [2024-04-24 21:35:50.459538] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:24.909 21:35:50 -- common/autotest_common.sh@960 -- # wait 2683010 00:21:25.168 21:35:50 -- spdkcli/nvmf.sh@1 -- # cleanup 00:21:25.168 21:35:50 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:21:25.168 21:35:50 -- spdkcli/common.sh@13 -- # '[' -n 2683010 ']' 00:21:25.168 21:35:50 -- spdkcli/common.sh@14 -- # killprocess 2683010 00:21:25.168 21:35:50 -- common/autotest_common.sh@936 -- # '[' -z 2683010 ']' 00:21:25.168 21:35:50 -- common/autotest_common.sh@940 -- # kill -0 2683010 00:21:25.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2683010) - No such process 00:21:25.168 21:35:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2683010 is not found' 00:21:25.168 Process with pid 2683010 is not found 00:21:25.168 21:35:50 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:25.168 21:35:50 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:25.168 21:35:50 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:25.168 00:21:25.168 real 0m16.048s 00:21:25.168 user 0m33.761s 00:21:25.168 sys 0m0.855s 00:21:25.168 21:35:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.168 21:35:50 -- common/autotest_common.sh@10 -- # set +x 00:21:25.168 ************************************ 00:21:25.168 END TEST spdkcli_nvmf_tcp 00:21:25.168 ************************************ 00:21:25.168 21:35:50 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:25.168 21:35:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:25.168 21:35:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.168 21:35:50 -- common/autotest_common.sh@10 -- # set +x 00:21:25.427 ************************************ 00:21:25.427 START TEST nvmf_identify_passthru 00:21:25.427 ************************************ 00:21:25.427 21:35:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:25.427 * Looking for test storage... 
00:21:25.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:25.427 21:35:50 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.427 21:35:50 -- nvmf/common.sh@7 -- # uname -s 00:21:25.427 21:35:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.427 21:35:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.427 21:35:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.427 21:35:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.427 21:35:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.427 21:35:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.427 21:35:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.427 21:35:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.427 21:35:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.427 21:35:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.427 21:35:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.427 21:35:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.427 21:35:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.427 21:35:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.427 21:35:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.427 21:35:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.427 21:35:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.427 21:35:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.427 21:35:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.427 21:35:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.428 21:35:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.428 21:35:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.428 21:35:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.428 21:35:50 -- paths/export.sh@5 -- # export PATH 00:21:25.428 21:35:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.428 21:35:50 -- nvmf/common.sh@47 -- # : 0 00:21:25.428 21:35:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.428 21:35:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.428 21:35:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.428 21:35:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.428 21:35:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.428 21:35:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.428 21:35:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.428 21:35:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.428 21:35:50 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.428 21:35:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.428 21:35:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.428 21:35:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.428 21:35:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.428 21:35:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.428 21:35:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.428 21:35:50 -- paths/export.sh@5 -- # export PATH 00:21:25.428 21:35:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.428 21:35:50 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:21:25.428 21:35:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:25.428 21:35:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.428 21:35:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:25.428 21:35:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:25.428 21:35:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:25.428 21:35:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.428 21:35:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:25.428 21:35:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.428 21:35:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:25.428 21:35:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:25.428 21:35:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:25.428 21:35:50 -- common/autotest_common.sh@10 -- # set +x 00:21:27.335 21:35:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:27.335 21:35:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.335 21:35:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.335 21:35:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.335 21:35:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.335 21:35:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.335 21:35:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.335 21:35:52 -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.335 21:35:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.335 21:35:52 -- nvmf/common.sh@296 -- # e810=() 00:21:27.335 21:35:52 -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.335 21:35:52 -- nvmf/common.sh@297 -- # x722=() 00:21:27.335 21:35:52 -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.335 21:35:52 -- nvmf/common.sh@298 -- # mlx=() 00:21:27.335 21:35:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.335 21:35:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.335 21:35:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.335 21:35:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.335 21:35:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.335 21:35:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.335 21:35:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.335 21:35:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.335 21:35:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.335 21:35:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.335 21:35:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.335 21:35:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.335 21:35:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.335 21:35:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.335 21:35:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.335 21:35:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.335 21:35:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:27.335 Found 0000:0a:00.0 (0x8086 - 
0x159b) 00:21:27.335 21:35:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.335 21:35:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:27.335 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:27.335 21:35:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.335 21:35:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.335 21:35:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.335 21:35:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:27.335 21:35:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.335 21:35:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:27.335 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:27.335 21:35:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.335 21:35:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.335 21:35:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.335 21:35:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:27.335 21:35:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.335 21:35:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:27.335 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:27.335 21:35:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.335 21:35:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:27.335 21:35:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:27.335 21:35:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:27.335 21:35:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.335 21:35:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.335 21:35:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.335 21:35:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.335 21:35:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.335 21:35:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.335 21:35:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.335 21:35:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.335 21:35:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.335 21:35:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.335 21:35:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.335 21:35:52 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:21:27.335 21:35:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.335 21:35:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.335 21:35:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.335 21:35:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.335 21:35:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.335 21:35:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.335 21:35:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.335 21:35:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:21:27.335 00:21:27.335 --- 10.0.0.2 ping statistics --- 00:21:27.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.335 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:21:27.335 21:35:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:21:27.335 00:21:27.335 --- 10.0.0.1 ping statistics --- 00:21:27.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.335 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:27.335 21:35:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.335 21:35:52 -- nvmf/common.sh@411 -- # return 0 00:21:27.335 21:35:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:27.335 21:35:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.335 21:35:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:27.335 21:35:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.335 21:35:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:27.335 21:35:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:27.335 21:35:52 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:21:27.336 21:35:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:27.336 21:35:52 -- common/autotest_common.sh@10 -- # set +x 00:21:27.336 21:35:52 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:21:27.336 21:35:52 -- common/autotest_common.sh@1510 -- # bdfs=() 00:21:27.336 21:35:52 -- common/autotest_common.sh@1510 -- # local bdfs 00:21:27.336 21:35:52 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:21:27.336 21:35:52 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:21:27.336 21:35:52 -- common/autotest_common.sh@1499 -- # bdfs=() 00:21:27.336 21:35:52 -- common/autotest_common.sh@1499 -- # local bdfs 00:21:27.336 21:35:52 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:27.336 21:35:52 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:27.336 21:35:52 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:21:27.593 21:35:53 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:21:27.593 21:35:53 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:21:27.593 21:35:53 -- common/autotest_common.sh@1513 -- # echo 0000:88:00.0 00:21:27.593 21:35:53 -- 
target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:21:27.593 21:35:53 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:21:27.593 21:35:53 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:21:27.593 21:35:53 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:21:27.593 21:35:53 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:21:27.593 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.791 21:35:57 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:21:31.792 21:35:57 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:21:31.792 21:35:57 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:21:31.792 21:35:57 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:21:31.792 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.026 21:36:01 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:21:36.026 21:36:01 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:21:36.026 21:36:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:36.026 21:36:01 -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 21:36:01 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:21:36.026 21:36:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:36.026 21:36:01 -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 21:36:01 -- target/identify_passthru.sh@31 -- # nvmfpid=2687619 00:21:36.026 21:36:01 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:36.026 21:36:01 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:36.026 21:36:01 -- target/identify_passthru.sh@35 -- # waitforlisten 2687619 00:21:36.026 21:36:01 -- common/autotest_common.sh@817 -- # '[' -z 2687619 ']' 00:21:36.026 21:36:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.026 21:36:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:36.026 21:36:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.026 21:36:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:36.026 21:36:01 -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 [2024-04-24 21:36:01.521721] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:21:36.026 [2024-04-24 21:36:01.521798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.026 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.026 [2024-04-24 21:36:01.589278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:36.284 [2024-04-24 21:36:01.703477] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.284 [2024-04-24 21:36:01.703531] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
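
What identify_passthru.sh is building toward here: read the controller's identity once over PCIe and once over NVMe/TCP through the target, and require the two to match. A minimal sketch of that comparison, reusing the identify binary and the grep/awk filters from the trace above (paths shortened, variable names illustrative only):

  # Identify the local controller over PCIe, then the same controller as
  # exported by the nvmf target over TCP, and compare serial numbers.
  local_sn=$(spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' \
             | grep 'Serial Number:' | awk '{print $3}')
  remote_sn=$(spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
             | grep 'Serial Number:' | awk '{print $3}')
  # The test fails when the passthru identity diverges from the local one.
  [ "$local_sn" != "$remote_sn" ] && exit 1
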
00:21:36.284 [2024-04-24 21:36:01.703546] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:36.284 [2024-04-24 21:36:01.703559] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:36.284 [2024-04-24 21:36:01.703570] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:36.284 [2024-04-24 21:36:01.703652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:36.284 [2024-04-24 21:36:01.703682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:36.284 [2024-04-24 21:36:01.703729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:36.284 [2024-04-24 21:36:01.703732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:36.284 21:36:01 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:36.284 21:36:01 -- common/autotest_common.sh@850 -- # return 0
00:21:36.284 21:36:01 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:21:36.284 21:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:36.284 21:36:01 -- common/autotest_common.sh@10 -- # set +x
00:21:36.284 INFO: Log level set to 20
00:21:36.284 INFO: Requests:
00:21:36.284 {
00:21:36.284 "jsonrpc": "2.0",
00:21:36.284 "method": "nvmf_set_config",
00:21:36.284 "id": 1,
00:21:36.284 "params": {
00:21:36.284 "admin_cmd_passthru": {
00:21:36.284 "identify_ctrlr": true
00:21:36.284 }
00:21:36.284 }
00:21:36.284 }
00:21:36.284
00:21:36.284 INFO: response:
00:21:36.284 {
00:21:36.284 "jsonrpc": "2.0",
00:21:36.284 "id": 1,
00:21:36.284 "result": true
00:21:36.284 }
00:21:36.284
00:21:36.284 21:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:36.284 21:36:01 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:21:36.284 21:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:36.284 21:36:01 -- common/autotest_common.sh@10 -- # set +x
00:21:36.284 INFO: Setting log level to 20
00:21:36.284 INFO: Setting log level to 20
00:21:36.284 INFO: Log level set to 20
00:21:36.284 INFO: Log level set to 20
00:21:36.284 INFO: Requests:
00:21:36.284 {
00:21:36.284 "jsonrpc": "2.0",
00:21:36.284 "method": "framework_start_init",
00:21:36.284 "id": 1
00:21:36.284 }
00:21:36.284
00:21:36.284 INFO: Requests:
00:21:36.284 {
00:21:36.284 "jsonrpc": "2.0",
00:21:36.284 "method": "framework_start_init",
00:21:36.284 "id": 1
00:21:36.284 }
00:21:36.284
00:21:36.284 [2024-04-24 21:36:01.855934] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:21:36.284 INFO: response:
00:21:36.284 {
00:21:36.284 "jsonrpc": "2.0",
00:21:36.284 "id": 1,
00:21:36.284 "result": true
00:21:36.284 }
00:21:36.284
00:21:36.284 INFO: response:
00:21:36.284 {
00:21:36.284 "jsonrpc": "2.0",
00:21:36.284 "id": 1,
00:21:36.284 "result": true
00:21:36.284 }
00:21:36.284
00:21:36.284 21:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:36.284 21:36:01 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:36.284 21:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:36.284 21:36:01 -- common/autotest_common.sh@10 -- # set +x
00:21:36.284 INFO: Setting log level to 40
00:21:36.284 INFO: Setting log level to 40
00:21:36.284 INFO: Setting log level to 40
00:21:36.284 [2024-04-24 21:36:01.865880] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:36.284 21:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:36.284 21:36:01 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:21:36.284 21:36:01 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:36.284 21:36:01 -- common/autotest_common.sh@10 -- # set +x
00:21:36.284 21:36:01 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
00:21:36.284 21:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:36.284 21:36:01 -- common/autotest_common.sh@10 -- # set +x
00:21:39.574 Nvme0n1
00:21:39.574 21:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:39.574 21:36:04 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:21:39.574 21:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:39.574 21:36:04 -- common/autotest_common.sh@10 -- # set +x
00:21:39.574 21:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:39.574 21:36:04 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:21:39.574 21:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:39.574 21:36:04 -- common/autotest_common.sh@10 -- # set +x
00:21:39.575 21:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:39.575 21:36:04 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:39.575 21:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:39.575 21:36:04 -- common/autotest_common.sh@10 -- # set +x
00:21:39.575 [2024-04-24 21:36:04.751541] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:39.575 21:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:39.575 21:36:04 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:21:39.575 21:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:39.575 21:36:04 -- common/autotest_common.sh@10 -- # set +x
00:21:39.575 [2024-04-24 21:36:04.759305] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:21:39.575 [
00:21:39.575 {
00:21:39.575 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:21:39.575 "subtype": "Discovery",
00:21:39.575 "listen_addresses": [],
00:21:39.575 "allow_any_host": true,
00:21:39.575 "hosts": []
00:21:39.575 },
00:21:39.575 {
00:21:39.575 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:39.575 "subtype": "NVMe",
00:21:39.575 "listen_addresses": [
00:21:39.575 {
00:21:39.575 "transport": "TCP",
00:21:39.575 "trtype": "TCP",
00:21:39.575 "adrfam": "IPv4",
00:21:39.575 "traddr": "10.0.0.2",
00:21:39.575 "trsvcid": "4420"
00:21:39.575 }
00:21:39.575 ],
00:21:39.575 "allow_any_host": true,
00:21:39.575 "hosts": [],
00:21:39.575 "serial_number": "SPDK00000000000001",
00:21:39.575 "model_number": "SPDK bdev Controller",
00:21:39.575 "max_namespaces": 1,
00:21:39.575 "min_cntlid": 1,
00:21:39.575 "max_cntlid": 65519,
00:21:39.575 "namespaces": [
00:21:39.575 {
00:21:39.575 "nsid": 1,
00:21:39.575 "bdev_name": "Nvme0n1",
00:21:39.575 "name": "Nvme0n1",
00:21:39.575 "nguid": "139FDD021A2B4098B328DAA2D66DC233",
00:21:39.575 "uuid": "139fdd02-1a2b-4098-b328-daa2d66dc233"
00:21:39.575 }
00:21:39.575 ]
00:21:39.575 }
00:21:39.575 ]
00:21:39.575 21:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:39.575 21:36:04 --
target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:39.575 21:36:04 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:21:39.575 21:36:04 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:21:39.575 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.575 21:36:04 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:21:39.575 21:36:04 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:39.575 21:36:04 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:21:39.575 21:36:04 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:21:39.575 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.575 21:36:05 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:21:39.575 21:36:05 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:21:39.575 21:36:05 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:21:39.575 21:36:05 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.575 21:36:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.575 21:36:05 -- common/autotest_common.sh@10 -- # set +x 00:21:39.575 21:36:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.575 21:36:05 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:21:39.575 21:36:05 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:21:39.575 21:36:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:39.575 21:36:05 -- nvmf/common.sh@117 -- # sync 00:21:39.575 21:36:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:39.575 21:36:05 -- nvmf/common.sh@120 -- # set +e 00:21:39.575 21:36:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:39.575 21:36:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:39.575 rmmod nvme_tcp 00:21:39.575 rmmod nvme_fabrics 00:21:39.575 rmmod nvme_keyring 00:21:39.575 21:36:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:39.575 21:36:05 -- nvmf/common.sh@124 -- # set -e 00:21:39.575 21:36:05 -- nvmf/common.sh@125 -- # return 0 00:21:39.575 21:36:05 -- nvmf/common.sh@478 -- # '[' -n 2687619 ']' 00:21:39.575 21:36:05 -- nvmf/common.sh@479 -- # killprocess 2687619 00:21:39.575 21:36:05 -- common/autotest_common.sh@936 -- # '[' -z 2687619 ']' 00:21:39.575 21:36:05 -- common/autotest_common.sh@940 -- # kill -0 2687619 00:21:39.575 21:36:05 -- common/autotest_common.sh@941 -- # uname 00:21:39.575 21:36:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:39.575 21:36:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2687619 00:21:39.575 21:36:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:39.575 21:36:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:39.575 21:36:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2687619' 00:21:39.575 killing process with pid 2687619 00:21:39.575 21:36:05 -- common/autotest_common.sh@955 -- # kill 2687619 00:21:39.575 [2024-04-24 21:36:05.213305] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 
hit 1 times 00:21:39.575 21:36:05 -- common/autotest_common.sh@960 -- # wait 2687619 00:21:41.477 21:36:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:41.477 21:36:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:41.477 21:36:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:41.477 21:36:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.477 21:36:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:41.477 21:36:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.477 21:36:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:41.477 21:36:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.380 21:36:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:43.380 00:21:43.380 real 0m18.004s 00:21:43.380 user 0m26.729s 00:21:43.380 sys 0m2.307s 00:21:43.380 21:36:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:43.380 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:21:43.380 ************************************ 00:21:43.380 END TEST nvmf_identify_passthru 00:21:43.380 ************************************ 00:21:43.380 21:36:08 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:21:43.380 21:36:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:43.380 21:36:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.380 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:21:43.380 ************************************ 00:21:43.380 START TEST nvmf_dif 00:21:43.380 ************************************ 00:21:43.380 21:36:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:21:43.380 * Looking for test storage... 
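
dif.sh, which starts here, repeats the same harness bootstrap the passthru test used, then layers DIF-aware fio runs over null bdevs. A condensed sketch of the flow that unfolds below (the function names are the ones visible in the trace, not the script verbatim):

  nvmftestinit                                    # NIC discovery + netns plumbing, as before
  NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'   # TCP transport inserts/strips DIF in flight
  nvmfappstart                                    # nvmf_tgt inside cvl_0_0_ns_spdk
  rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
  run_test fio_dif_1_default fio_dif_1                             # one null bdev, one thread
  run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems   # two subsystems, two threads
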
00:21:43.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:43.380 21:36:09 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.380 21:36:09 -- nvmf/common.sh@7 -- # uname -s 00:21:43.380 21:36:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.380 21:36:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.380 21:36:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.380 21:36:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.380 21:36:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.380 21:36:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.380 21:36:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.380 21:36:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.380 21:36:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.380 21:36:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.380 21:36:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.380 21:36:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.380 21:36:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.380 21:36:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.380 21:36:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.380 21:36:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.380 21:36:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.380 21:36:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.380 21:36:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.380 21:36:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.380 21:36:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.380 21:36:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.380 21:36:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.380 21:36:09 -- paths/export.sh@5 -- # export PATH 00:21:43.380 21:36:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.380 21:36:09 -- nvmf/common.sh@47 -- # : 0 00:21:43.380 21:36:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.380 21:36:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.380 21:36:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.380 21:36:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.380 21:36:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.380 21:36:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.380 21:36:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.380 21:36:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.380 21:36:09 -- target/dif.sh@15 -- # NULL_META=16 00:21:43.380 21:36:09 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:43.380 21:36:09 -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:43.380 21:36:09 -- target/dif.sh@15 -- # NULL_DIF=1 00:21:43.380 21:36:09 -- target/dif.sh@135 -- # nvmftestinit 00:21:43.380 21:36:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:43.380 21:36:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.380 21:36:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:43.380 21:36:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:43.380 21:36:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:43.380 21:36:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.380 21:36:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:43.380 21:36:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.380 21:36:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:43.380 21:36:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:43.380 21:36:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.380 21:36:09 -- common/autotest_common.sh@10 -- # set +x 00:21:45.282 21:36:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:45.282 21:36:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:45.282 21:36:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:45.282 21:36:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:45.282 21:36:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:45.282 21:36:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:45.282 21:36:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:45.282 21:36:10 -- nvmf/common.sh@295 -- # net_devs=() 00:21:45.282 21:36:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:45.282 21:36:10 -- nvmf/common.sh@296 -- # e810=() 00:21:45.282 21:36:10 -- nvmf/common.sh@296 -- # local -ga e810 00:21:45.282 21:36:10 -- nvmf/common.sh@297 -- # x722=() 00:21:45.282 21:36:10 -- nvmf/common.sh@297 -- # local -ga x722 00:21:45.282 21:36:10 -- nvmf/common.sh@298 -- # mlx=() 00:21:45.282 21:36:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:45.282 21:36:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.282 21:36:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.282 21:36:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.282 21:36:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
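
gather_supported_nvmf_pci_devs, traced again here, buckets NICs by PCI vendor:device ID out of the associative pci_bus_cache map and keeps only the bucket named by SPDK_TEST_NVMF_NICS. In miniature, with the IDs taken from the trace (the family names in the comments are an interpretation of those IDs, not something the log states):

  intel=0x8086; mellanox=0x15b3                # vendor IDs, as in the trace
  e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 100G parts
  e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810 25G parts; both ports found here are 0x159b
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs probed
  pci_devs+=("${e810[@]}")                     # SPDK_TEST_NVMF_NICS=e810 selects the e810 bucket
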
00:21:45.282 21:36:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.282 21:36:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.282 21:36:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.282 21:36:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.282 21:36:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.282 21:36:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.282 21:36:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.282 21:36:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:45.282 21:36:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:45.282 21:36:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:45.282 21:36:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.282 21:36:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:45.282 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:45.282 21:36:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.282 21:36:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:45.282 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:45.282 21:36:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:45.282 21:36:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:45.282 21:36:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.282 21:36:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.282 21:36:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:45.282 21:36:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.282 21:36:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:45.282 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:45.282 21:36:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.282 21:36:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.282 21:36:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.282 21:36:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:45.282 21:36:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.282 21:36:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:45.282 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:45.282 21:36:10 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:45.282 21:36:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:45.282 21:36:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:45.283 21:36:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:45.283 21:36:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:45.283 21:36:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:45.283 21:36:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.283 21:36:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.283 21:36:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.283 21:36:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:45.283 21:36:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.283 21:36:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.283 21:36:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:45.283 21:36:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.283 21:36:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.283 21:36:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:45.283 21:36:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:45.283 21:36:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.283 21:36:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.283 21:36:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.283 21:36:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.283 21:36:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:45.283 21:36:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.541 21:36:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.541 21:36:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.541 21:36:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:45.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:21:45.541 00:21:45.541 --- 10.0.0.2 ping statistics --- 00:21:45.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.541 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:21:45.541 21:36:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:21:45.541 00:21:45.541 --- 10.0.0.1 ping statistics --- 00:21:45.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.541 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:21:45.541 21:36:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.541 21:36:11 -- nvmf/common.sh@411 -- # return 0 00:21:45.541 21:36:11 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:21:45.541 21:36:11 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:21:46.479 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:46.479 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:46.479 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:46.479 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:46.479 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:46.479 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:46.479 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:46.479 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:46.480 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:21:46.480 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:46.480 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:46.480 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:46.480 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:46.480 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:46.480 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:46.480 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:46.480 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:21:46.738 21:36:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.738 21:36:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:46.738 21:36:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:46.738 21:36:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.738 21:36:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:46.738 21:36:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:46.738 21:36:12 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:46.738 21:36:12 -- target/dif.sh@137 -- # nvmfappstart 00:21:46.738 21:36:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:46.738 21:36:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:46.738 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:21:46.738 21:36:12 -- nvmf/common.sh@470 -- # nvmfpid=2691410 00:21:46.738 21:36:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:46.738 21:36:12 -- nvmf/common.sh@471 -- # waitforlisten 2691410 00:21:46.738 21:36:12 -- common/autotest_common.sh@817 -- # '[' -z 2691410 ']' 00:21:46.738 21:36:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.738 21:36:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:46.738 21:36:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
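
The namespace wiring nvmf_tcp_init just performed, for the second time in this log, reduced to its essentials: port cvl_0_0 becomes the target side at 10.0.0.2 inside a private namespace, port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and a pair of pings proves the path in both directions. Every command below appears verbatim in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
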
00:21:46.738 21:36:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:46.738 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:21:46.738 [2024-04-24 21:36:12.291897] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:21:46.738 [2024-04-24 21:36:12.291999] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.738 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.738 [2024-04-24 21:36:12.361706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.998 [2024-04-24 21:36:12.478777] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.998 [2024-04-24 21:36:12.478837] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.998 [2024-04-24 21:36:12.478863] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.998 [2024-04-24 21:36:12.478876] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.998 [2024-04-24 21:36:12.478889] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.998 [2024-04-24 21:36:12.478920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.571 21:36:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:47.571 21:36:13 -- common/autotest_common.sh@850 -- # return 0 00:21:47.571 21:36:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:47.571 21:36:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:47.571 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:21:47.571 21:36:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.571 21:36:13 -- target/dif.sh@139 -- # create_transport 00:21:47.571 21:36:13 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:47.571 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.571 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:21:47.832 [2024-04-24 21:36:13.251553] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.832 21:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.832 21:36:13 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:47.832 21:36:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:47.832 21:36:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:47.832 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:21:47.832 ************************************ 00:21:47.832 START TEST fio_dif_1_default 00:21:47.832 ************************************ 00:21:47.832 21:36:13 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:21:47.832 21:36:13 -- target/dif.sh@86 -- # create_subsystems 0 00:21:47.832 21:36:13 -- target/dif.sh@28 -- # local sub 00:21:47.832 21:36:13 -- target/dif.sh@30 -- # for sub in "$@" 00:21:47.832 21:36:13 -- target/dif.sh@31 -- # create_subsystem 0 00:21:47.832 21:36:13 -- target/dif.sh@18 -- # local sub_id=0 00:21:47.832 21:36:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:47.832 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.832 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:21:47.832 
bdev_null0 00:21:47.832 21:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.832 21:36:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:47.832 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.832 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:21:47.832 21:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.832 21:36:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:47.832 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.832 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:21:47.832 21:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.832 21:36:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:47.832 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.832 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:21:47.832 [2024-04-24 21:36:13.380074] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.832 21:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.832 21:36:13 -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:47.832 21:36:13 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:47.832 21:36:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:47.832 21:36:13 -- nvmf/common.sh@521 -- # config=() 00:21:47.832 21:36:13 -- nvmf/common.sh@521 -- # local subsystem config 00:21:47.832 21:36:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:47.832 21:36:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:47.832 { 00:21:47.832 "params": { 00:21:47.832 "name": "Nvme$subsystem", 00:21:47.832 "trtype": "$TEST_TRANSPORT", 00:21:47.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.832 "adrfam": "ipv4", 00:21:47.832 "trsvcid": "$NVMF_PORT", 00:21:47.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.833 "hdgst": ${hdgst:-false}, 00:21:47.833 "ddgst": ${ddgst:-false} 00:21:47.833 }, 00:21:47.833 "method": "bdev_nvme_attach_controller" 00:21:47.833 } 00:21:47.833 EOF 00:21:47.833 )") 00:21:47.833 21:36:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:47.833 21:36:13 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:47.833 21:36:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:47.833 21:36:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:47.833 21:36:13 -- target/dif.sh@82 -- # gen_fio_conf 00:21:47.833 21:36:13 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:47.833 21:36:13 -- target/dif.sh@54 -- # local file 00:21:47.833 21:36:13 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:47.833 21:36:13 -- common/autotest_common.sh@1327 -- # shift 00:21:47.833 21:36:13 -- target/dif.sh@56 -- # cat 00:21:47.833 21:36:13 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:47.833 21:36:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:47.833 21:36:13 -- nvmf/common.sh@543 -- # cat 00:21:47.833 21:36:13 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:47.833 21:36:13 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:47.833 21:36:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:47.833 21:36:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:47.833 21:36:13 -- target/dif.sh@72 -- # (( file <= files )) 00:21:47.833 21:36:13 -- nvmf/common.sh@545 -- # jq . 00:21:47.833 21:36:13 -- nvmf/common.sh@546 -- # IFS=, 00:21:47.833 21:36:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:47.833 "params": { 00:21:47.833 "name": "Nvme0", 00:21:47.833 "trtype": "tcp", 00:21:47.833 "traddr": "10.0.0.2", 00:21:47.833 "adrfam": "ipv4", 00:21:47.833 "trsvcid": "4420", 00:21:47.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:47.833 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:47.833 "hdgst": false, 00:21:47.833 "ddgst": false 00:21:47.833 }, 00:21:47.833 "method": "bdev_nvme_attach_controller" 00:21:47.833 }' 00:21:47.833 21:36:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:47.833 21:36:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:47.833 21:36:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:47.833 21:36:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:47.833 21:36:13 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:47.833 21:36:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:47.833 21:36:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:47.833 21:36:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:47.833 21:36:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:47.833 21:36:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:48.092 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:48.092 fio-3.35 00:21:48.092 Starting 1 thread 00:21:48.092 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.307 00:22:00.307 filename0: (groupid=0, jobs=1): err= 0: pid=2691657: Wed Apr 24 21:36:24 2024 00:22:00.307 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10005msec) 00:22:00.307 slat (nsec): min=4965, max=89354, avg=11110.25, stdev=5736.82 00:22:00.307 clat (usec): min=40901, max=43636, avg=41826.83, stdev=399.18 00:22:00.307 lat (usec): min=40909, max=43665, avg=41837.94, stdev=399.87 00:22:00.307 clat percentiles (usec): 00:22:00.307 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:22:00.307 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:22:00.307 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:22:00.307 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:22:00.307 | 99.99th=[43779] 00:22:00.307 bw ( KiB/s): min= 352, max= 384, per=99.42%, avg=380.80, stdev= 9.85, samples=20 00:22:00.307 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:22:00.307 lat (msec) : 50=100.00% 00:22:00.307 cpu : usr=88.96%, sys=10.74%, ctx=13, majf=0, minf=233 00:22:00.307 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:00.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.307 issued rwts: total=956,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:22:00.307 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:00.307 00:22:00.307 Run status group 0 (all jobs): 00:22:00.307 READ: bw=382KiB/s (391kB/s), 382KiB/s-382KiB/s (391kB/s-391kB/s), io=3824KiB (3916kB), run=10005-10005msec 00:22:00.307 21:36:24 -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:00.307 21:36:24 -- target/dif.sh@43 -- # local sub 00:22:00.307 21:36:24 -- target/dif.sh@45 -- # for sub in "$@" 00:22:00.307 21:36:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:00.307 21:36:24 -- target/dif.sh@36 -- # local sub_id=0 00:22:00.307 21:36:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:00.307 21:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.307 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.307 21:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.307 21:36:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:00.307 21:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.307 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.307 21:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.307 00:22:00.307 real 0m11.180s 00:22:00.307 user 0m10.176s 00:22:00.307 sys 0m1.380s 00:22:00.307 21:36:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:00.307 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.307 ************************************ 00:22:00.307 END TEST fio_dif_1_default 00:22:00.307 ************************************ 00:22:00.307 21:36:24 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:00.307 21:36:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:00.307 21:36:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:00.307 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.307 ************************************ 00:22:00.307 START TEST fio_dif_1_multi_subsystems 00:22:00.307 ************************************ 00:22:00.307 21:36:24 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:22:00.307 21:36:24 -- target/dif.sh@92 -- # local files=1 00:22:00.307 21:36:24 -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:00.307 21:36:24 -- target/dif.sh@28 -- # local sub 00:22:00.307 21:36:24 -- target/dif.sh@30 -- # for sub in "$@" 00:22:00.307 21:36:24 -- target/dif.sh@31 -- # create_subsystem 0 00:22:00.307 21:36:24 -- target/dif.sh@18 -- # local sub_id=0 00:22:00.307 21:36:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:00.307 21:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.307 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.307 bdev_null0 00:22:00.307 21:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.307 21:36:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:00.307 21:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.307 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.307 21:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.307 21:36:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:00.307 21:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.307 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.307 21:36:24 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:22:00.307 21:36:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:00.307 21:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.307 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.307 [2024-04-24 21:36:24.669586] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.307 21:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.307 21:36:24 -- target/dif.sh@30 -- # for sub in "$@" 00:22:00.307 21:36:24 -- target/dif.sh@31 -- # create_subsystem 1 00:22:00.307 21:36:24 -- target/dif.sh@18 -- # local sub_id=1 00:22:00.308 21:36:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:00.308 21:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.308 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.308 bdev_null1 00:22:00.308 21:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.308 21:36:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:00.308 21:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.308 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.308 21:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.308 21:36:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:00.308 21:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.308 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.308 21:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.308 21:36:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.308 21:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.308 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.308 21:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.308 21:36:24 -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:00.308 21:36:24 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:00.308 21:36:24 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:00.308 21:36:24 -- nvmf/common.sh@521 -- # config=() 00:22:00.308 21:36:24 -- nvmf/common.sh@521 -- # local subsystem config 00:22:00.308 21:36:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.308 21:36:24 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:00.308 21:36:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.308 { 00:22:00.308 "params": { 00:22:00.308 "name": "Nvme$subsystem", 00:22:00.308 "trtype": "$TEST_TRANSPORT", 00:22:00.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.308 "adrfam": "ipv4", 00:22:00.308 "trsvcid": "$NVMF_PORT", 00:22:00.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.308 "hdgst": ${hdgst:-false}, 00:22:00.308 "ddgst": ${ddgst:-false} 00:22:00.308 }, 00:22:00.308 "method": "bdev_nvme_attach_controller" 00:22:00.308 } 00:22:00.308 EOF 00:22:00.308 )") 00:22:00.308 21:36:24 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:00.308 21:36:24 -- target/dif.sh@82 -- # gen_fio_conf 00:22:00.308 21:36:24 -- 
common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:00.308 21:36:24 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:00.308 21:36:24 -- target/dif.sh@54 -- # local file 00:22:00.308 21:36:24 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:00.308 21:36:24 -- target/dif.sh@56 -- # cat 00:22:00.308 21:36:24 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:00.308 21:36:24 -- common/autotest_common.sh@1327 -- # shift 00:22:00.308 21:36:24 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:00.308 21:36:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.308 21:36:24 -- nvmf/common.sh@543 -- # cat 00:22:00.308 21:36:24 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:00.308 21:36:24 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:00.308 21:36:24 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:00.308 21:36:24 -- target/dif.sh@72 -- # (( file <= files )) 00:22:00.308 21:36:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:00.308 21:36:24 -- target/dif.sh@73 -- # cat 00:22:00.308 21:36:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.308 21:36:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.308 { 00:22:00.308 "params": { 00:22:00.308 "name": "Nvme$subsystem", 00:22:00.308 "trtype": "$TEST_TRANSPORT", 00:22:00.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.308 "adrfam": "ipv4", 00:22:00.308 "trsvcid": "$NVMF_PORT", 00:22:00.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.308 "hdgst": ${hdgst:-false}, 00:22:00.308 "ddgst": ${ddgst:-false} 00:22:00.308 }, 00:22:00.308 "method": "bdev_nvme_attach_controller" 00:22:00.308 } 00:22:00.308 EOF 00:22:00.308 )") 00:22:00.308 21:36:24 -- nvmf/common.sh@543 -- # cat 00:22:00.308 21:36:24 -- target/dif.sh@72 -- # (( file++ )) 00:22:00.308 21:36:24 -- target/dif.sh@72 -- # (( file <= files )) 00:22:00.308 21:36:24 -- nvmf/common.sh@545 -- # jq . 
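
The glue being assembled here: fio never opens /dev/nvme devices. The spdk_bdev ioengine is LD_PRELOADed into fio, the JSON that gen_nvmf_target_json prints (one bdev_nvme_attach_controller block per subsystem, as shown around this point) arrives over a file descriptor as --spdk_json_conf, and each fio job then drives one attached controller's namespace. Schematically, with the paths exactly as they appear in the trace:

  # fd 62 carries the bdev config from gen_nvmf_target_json 0 1;
  # fd 61 carries the job file from gen_fio_conf.
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
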
00:22:00.308 21:36:24 -- nvmf/common.sh@546 -- # IFS=, 00:22:00.308 21:36:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:00.308 "params": { 00:22:00.308 "name": "Nvme0", 00:22:00.308 "trtype": "tcp", 00:22:00.308 "traddr": "10.0.0.2", 00:22:00.308 "adrfam": "ipv4", 00:22:00.308 "trsvcid": "4420", 00:22:00.308 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:00.308 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:00.308 "hdgst": false, 00:22:00.308 "ddgst": false 00:22:00.308 }, 00:22:00.308 "method": "bdev_nvme_attach_controller" 00:22:00.308 },{ 00:22:00.308 "params": { 00:22:00.308 "name": "Nvme1", 00:22:00.308 "trtype": "tcp", 00:22:00.308 "traddr": "10.0.0.2", 00:22:00.308 "adrfam": "ipv4", 00:22:00.308 "trsvcid": "4420", 00:22:00.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.308 "hdgst": false, 00:22:00.308 "ddgst": false 00:22:00.308 }, 00:22:00.308 "method": "bdev_nvme_attach_controller" 00:22:00.308 }' 00:22:00.308 21:36:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:00.308 21:36:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:00.308 21:36:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.308 21:36:24 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:00.308 21:36:24 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:00.308 21:36:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:00.308 21:36:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:00.308 21:36:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:00.308 21:36:24 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:00.308 21:36:24 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:00.308 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:00.308 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:00.308 fio-3.35 00:22:00.308 Starting 2 threads 00:22:00.308 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.284 00:22:10.284 filename0: (groupid=0, jobs=1): err= 0: pid=2693074: Wed Apr 24 21:36:35 2024 00:22:10.284 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10033msec) 00:22:10.284 slat (nsec): min=6240, max=38414, avg=10507.14, stdev=5028.09 00:22:10.284 clat (usec): min=40958, max=43690, avg=41944.76, stdev=199.93 00:22:10.284 lat (usec): min=40965, max=43729, avg=41955.27, stdev=200.36 00:22:10.284 clat percentiles (usec): 00:22:10.284 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:22:10.284 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:22:10.284 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:22:10.284 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:22:10.284 | 99.99th=[43779] 00:22:10.284 bw ( KiB/s): min= 352, max= 384, per=49.86%, avg=380.80, stdev= 9.85, samples=20 00:22:10.284 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:22:10.284 lat (msec) : 50=100.00% 00:22:10.284 cpu : usr=94.20%, sys=5.51%, ctx=13, majf=0, minf=163 00:22:10.284 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:10.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:22:10.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.284 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.284 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:10.284 filename1: (groupid=0, jobs=1): err= 0: pid=2693075: Wed Apr 24 21:36:35 2024 00:22:10.284 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10034msec) 00:22:10.284 slat (nsec): min=7047, max=63123, avg=11273.24, stdev=5571.14 00:22:10.284 clat (usec): min=40936, max=43035, avg=41946.07, stdev=196.94 00:22:10.284 lat (usec): min=40944, max=43049, avg=41957.34, stdev=197.28 00:22:10.284 clat percentiles (usec): 00:22:10.284 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:22:10.284 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:22:10.284 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:22:10.284 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:22:10.284 | 99.99th=[43254] 00:22:10.284 bw ( KiB/s): min= 352, max= 384, per=49.86%, avg=380.80, stdev= 9.85, samples=20 00:22:10.284 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:22:10.284 lat (msec) : 50=100.00% 00:22:10.284 cpu : usr=94.83%, sys=4.87%, ctx=13, majf=0, minf=132 00:22:10.284 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:10.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:10.284 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:10.284 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:10.284 00:22:10.284 Run status group 0 (all jobs): 00:22:10.284 READ: bw=762KiB/s (781kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7648KiB (7832kB), run=10033-10034msec 00:22:10.542 21:36:36 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:10.542 21:36:36 -- target/dif.sh@43 -- # local sub 00:22:10.542 21:36:36 -- target/dif.sh@45 -- # for sub in "$@" 00:22:10.542 21:36:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:10.542 21:36:36 -- target/dif.sh@36 -- # local sub_id=0 00:22:10.542 21:36:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:10.542 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.542 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:10.542 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.542 21:36:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:10.542 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.542 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:10.542 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.542 21:36:36 -- target/dif.sh@45 -- # for sub in "$@" 00:22:10.542 21:36:36 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:10.542 21:36:36 -- target/dif.sh@36 -- # local sub_id=1 00:22:10.542 21:36:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.542 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.542 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:10.542 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.542 21:36:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:10.542 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.542 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:10.542 
21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.542 00:22:10.542 real 0m11.484s 00:22:10.542 user 0m20.277s 00:22:10.542 sys 0m1.335s 00:22:10.542 21:36:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:10.542 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:10.542 ************************************ 00:22:10.542 END TEST fio_dif_1_multi_subsystems 00:22:10.542 ************************************ 00:22:10.542 21:36:36 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:10.542 21:36:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:10.542 21:36:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:10.542 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:10.800 ************************************ 00:22:10.800 START TEST fio_dif_rand_params 00:22:10.800 ************************************ 00:22:10.800 21:36:36 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:22:10.800 21:36:36 -- target/dif.sh@100 -- # local NULL_DIF 00:22:10.800 21:36:36 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:10.800 21:36:36 -- target/dif.sh@103 -- # NULL_DIF=3 00:22:10.800 21:36:36 -- target/dif.sh@103 -- # bs=128k 00:22:10.800 21:36:36 -- target/dif.sh@103 -- # numjobs=3 00:22:10.800 21:36:36 -- target/dif.sh@103 -- # iodepth=3 00:22:10.800 21:36:36 -- target/dif.sh@103 -- # runtime=5 00:22:10.800 21:36:36 -- target/dif.sh@105 -- # create_subsystems 0 00:22:10.800 21:36:36 -- target/dif.sh@28 -- # local sub 00:22:10.800 21:36:36 -- target/dif.sh@30 -- # for sub in "$@" 00:22:10.800 21:36:36 -- target/dif.sh@31 -- # create_subsystem 0 00:22:10.800 21:36:36 -- target/dif.sh@18 -- # local sub_id=0 00:22:10.800 21:36:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:10.800 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.800 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:10.800 bdev_null0 00:22:10.800 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.800 21:36:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:10.800 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.800 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:10.800 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.800 21:36:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:10.800 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.800 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:10.800 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.800 21:36:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:10.800 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.800 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:10.800 [2024-04-24 21:36:36.286435] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.800 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.800 21:36:36 -- target/dif.sh@106 -- # fio /dev/fd/62 00:22:10.800 21:36:36 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:10.800 21:36:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:10.800 21:36:36 -- nvmf/common.sh@521 -- # config=() 00:22:10.800 
21:36:36 -- nvmf/common.sh@521 -- # local subsystem config 00:22:10.800 21:36:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:10.800 21:36:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:10.800 21:36:36 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:10.801 21:36:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:10.801 { 00:22:10.801 "params": { 00:22:10.801 "name": "Nvme$subsystem", 00:22:10.801 "trtype": "$TEST_TRANSPORT", 00:22:10.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.801 "adrfam": "ipv4", 00:22:10.801 "trsvcid": "$NVMF_PORT", 00:22:10.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.801 "hdgst": ${hdgst:-false}, 00:22:10.801 "ddgst": ${ddgst:-false} 00:22:10.801 }, 00:22:10.801 "method": "bdev_nvme_attach_controller" 00:22:10.801 } 00:22:10.801 EOF 00:22:10.801 )") 00:22:10.801 21:36:36 -- target/dif.sh@82 -- # gen_fio_conf 00:22:10.801 21:36:36 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:10.801 21:36:36 -- target/dif.sh@54 -- # local file 00:22:10.801 21:36:36 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:10.801 21:36:36 -- target/dif.sh@56 -- # cat 00:22:10.801 21:36:36 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:10.801 21:36:36 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:10.801 21:36:36 -- common/autotest_common.sh@1327 -- # shift 00:22:10.801 21:36:36 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:10.801 21:36:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:10.801 21:36:36 -- nvmf/common.sh@543 -- # cat 00:22:10.801 21:36:36 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:10.801 21:36:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:10.801 21:36:36 -- target/dif.sh@72 -- # (( file <= files )) 00:22:10.801 21:36:36 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:10.801 21:36:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:10.801 21:36:36 -- nvmf/common.sh@545 -- # jq . 
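Interleaved with the config assembly, the @1323-@1338 lines show how autotest_common.sh launches fio against the SPDK bdev plugin: it ldd's the plugin, greps for a linked sanitizer runtime (libasan, then libclang_rt.asan), and prepends any hit to LD_PRELOAD ahead of the plugin itself, so the sanitizer initializes before fio dlopens anything. In this run both greps come back empty (asan_lib=), which is why LD_PRELOAD ends up as just ' .../spdk_bdev' with a leading space. A sketch of that logic, with the surrounding function body approximated:

fio_plugin() {
    local plugin=$1
    shift
    local fio_dir=/usr/src/fio
    local sanitizers=('libasan' 'libclang_rt.asan')
    local sanitizer asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
        # field 3 is the resolved library path, empty if not linked.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # The sanitizer runtime (if any) must precede the plugin in the
    # preload list; an unsanitized build just gets the plugin.
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
}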
00:22:10.801 21:36:36 -- nvmf/common.sh@546 -- # IFS=, 00:22:10.801 21:36:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:10.801 "params": { 00:22:10.801 "name": "Nvme0", 00:22:10.801 "trtype": "tcp", 00:22:10.801 "traddr": "10.0.0.2", 00:22:10.801 "adrfam": "ipv4", 00:22:10.801 "trsvcid": "4420", 00:22:10.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:10.801 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:10.801 "hdgst": false, 00:22:10.801 "ddgst": false 00:22:10.801 }, 00:22:10.801 "method": "bdev_nvme_attach_controller" 00:22:10.801 }' 00:22:10.801 21:36:36 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:10.801 21:36:36 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:10.801 21:36:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:10.801 21:36:36 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:10.801 21:36:36 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:10.801 21:36:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:10.801 21:36:36 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:10.801 21:36:36 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:10.801 21:36:36 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:10.801 21:36:36 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:11.060 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:11.060 ... 00:22:11.060 fio-3.35 00:22:11.060 Starting 3 threads 00:22:11.060 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.619 00:22:17.619 filename0: (groupid=0, jobs=1): err= 0: pid=2694486: Wed Apr 24 21:36:42 2024 00:22:17.619 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(123MiB/5033msec) 00:22:17.619 slat (nsec): min=7321, max=34826, avg=12462.00, stdev=3273.37 00:22:17.619 clat (usec): min=6164, max=93961, avg=15337.42, stdev=13721.61 00:22:17.619 lat (usec): min=6176, max=93972, avg=15349.88, stdev=13721.48 00:22:17.619 clat percentiles (usec): 00:22:17.619 | 1.00th=[ 6587], 5.00th=[ 7111], 10.00th=[ 7635], 20.00th=[ 8848], 00:22:17.619 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10814], 60.00th=[11731], 00:22:17.619 | 70.00th=[12780], 80.00th=[13960], 90.00th=[49021], 95.00th=[52691], 00:22:17.619 | 99.00th=[55313], 99.50th=[56361], 99.90th=[93848], 99.95th=[93848], 00:22:17.619 | 99.99th=[93848] 00:22:17.619 bw ( KiB/s): min=14592, max=35584, per=36.24%, avg=25088.00, stdev=7375.29, samples=10 00:22:17.619 iops : min= 114, max= 278, avg=196.00, stdev=57.62, samples=10 00:22:17.619 lat (msec) : 10=35.81%, 20=53.61%, 50=1.22%, 100=9.36% 00:22:17.619 cpu : usr=89.84%, sys=9.64%, ctx=12, majf=0, minf=117 00:22:17.619 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.619 issued rwts: total=983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.619 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:17.619 filename0: (groupid=0, jobs=1): err= 0: pid=2694487: Wed Apr 24 21:36:42 2024 00:22:17.619 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(125MiB/5005msec) 00:22:17.619 slat (nsec): min=7296, max=35319, avg=12229.89, stdev=2995.31 00:22:17.619 clat 
(usec): min=5023, max=96214, avg=15041.63, stdev=14210.67 00:22:17.619 lat (usec): min=5035, max=96226, avg=15053.86, stdev=14210.67 00:22:17.619 clat percentiles (usec): 00:22:17.619 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 8717], 00:22:17.619 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10552], 60.00th=[11469], 00:22:17.619 | 70.00th=[12649], 80.00th=[13829], 90.00th=[47973], 95.00th=[52167], 00:22:17.619 | 99.00th=[56361], 99.50th=[92799], 99.90th=[95945], 99.95th=[95945], 00:22:17.619 | 99.99th=[95945] 00:22:17.619 bw ( KiB/s): min=14080, max=31744, per=36.76%, avg=25446.40, stdev=5398.57, samples=10 00:22:17.619 iops : min= 110, max= 248, avg=198.80, stdev=42.18, samples=10 00:22:17.619 lat (msec) : 10=39.42%, 20=50.55%, 50=0.90%, 100=9.13% 00:22:17.619 cpu : usr=89.75%, sys=9.63%, ctx=11, majf=0, minf=57 00:22:17.619 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.619 issued rwts: total=997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.619 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:17.619 filename0: (groupid=0, jobs=1): err= 0: pid=2694488: Wed Apr 24 21:36:42 2024 00:22:17.619 read: IOPS=148, BW=18.5MiB/s (19.4MB/s)(92.8MiB/5013msec) 00:22:17.619 slat (nsec): min=6077, max=59915, avg=11828.19, stdev=3810.90 00:22:17.619 clat (usec): min=5989, max=94366, avg=20243.99, stdev=18044.24 00:22:17.619 lat (usec): min=6000, max=94377, avg=20255.81, stdev=18044.20 00:22:17.619 clat percentiles (usec): 00:22:17.619 | 1.00th=[ 7177], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9896], 00:22:17.619 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11994], 60.00th=[13042], 00:22:17.619 | 70.00th=[14091], 80.00th=[49546], 90.00th=[52691], 95.00th=[54789], 00:22:17.619 | 99.00th=[57934], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:22:17.619 | 99.99th=[93848] 00:22:17.619 bw ( KiB/s): min=13056, max=25293, per=27.32%, avg=18913.30, stdev=3708.75, samples=10 00:22:17.619 iops : min= 102, max= 197, avg=147.70, stdev=28.86, samples=10 00:22:17.619 lat (msec) : 10=20.89%, 20=58.63%, 50=0.81%, 100=19.68% 00:22:17.619 cpu : usr=91.02%, sys=8.54%, ctx=8, majf=0, minf=83 00:22:17.619 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.619 issued rwts: total=742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.619 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:17.619 00:22:17.619 Run status group 0 (all jobs): 00:22:17.619 READ: bw=67.6MiB/s (70.9MB/s), 18.5MiB/s-24.9MiB/s (19.4MB/s-26.1MB/s), io=340MiB (357MB), run=5005-5033msec 00:22:17.619 21:36:42 -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:17.619 21:36:42 -- target/dif.sh@43 -- # local sub 00:22:17.619 21:36:42 -- target/dif.sh@45 -- # for sub in "$@" 00:22:17.619 21:36:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:17.619 21:36:42 -- target/dif.sh@36 -- # local sub_id=0 00:22:17.619 21:36:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:22:17.619 21:36:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.619 21:36:42 -- target/dif.sh@109 -- # NULL_DIF=2 00:22:17.619 21:36:42 -- target/dif.sh@109 -- # bs=4k 00:22:17.619 21:36:42 -- target/dif.sh@109 -- # numjobs=8 00:22:17.619 21:36:42 -- target/dif.sh@109 -- # iodepth=16 00:22:17.619 21:36:42 -- target/dif.sh@109 -- # runtime= 00:22:17.619 21:36:42 -- target/dif.sh@109 -- # files=2 00:22:17.619 21:36:42 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:17.619 21:36:42 -- target/dif.sh@28 -- # local sub 00:22:17.619 21:36:42 -- target/dif.sh@30 -- # for sub in "$@" 00:22:17.619 21:36:42 -- target/dif.sh@31 -- # create_subsystem 0 00:22:17.619 21:36:42 -- target/dif.sh@18 -- # local sub_id=0 00:22:17.619 21:36:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 bdev_null0 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.619 21:36:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.619 21:36:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.619 21:36:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 [2024-04-24 21:36:42.412827] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.619 21:36:42 -- target/dif.sh@30 -- # for sub in "$@" 00:22:17.619 21:36:42 -- target/dif.sh@31 -- # create_subsystem 1 00:22:17.619 21:36:42 -- target/dif.sh@18 -- # local sub_id=1 00:22:17.619 21:36:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 bdev_null1 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.619 21:36:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.619 21:36:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:17.619 21:36:42 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.619 21:36:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.619 21:36:42 -- target/dif.sh@30 -- # for sub in "$@" 00:22:17.619 21:36:42 -- target/dif.sh@31 -- # create_subsystem 2 00:22:17.619 21:36:42 -- target/dif.sh@18 -- # local sub_id=2 00:22:17.619 21:36:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 bdev_null2 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.619 21:36:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.619 21:36:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:17.619 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.619 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.619 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.620 21:36:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:17.620 21:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.620 21:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:17.620 21:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.620 21:36:42 -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:17.620 21:36:42 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:17.620 21:36:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:17.620 21:36:42 -- nvmf/common.sh@521 -- # config=() 00:22:17.620 21:36:42 -- nvmf/common.sh@521 -- # local subsystem config 00:22:17.620 21:36:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:17.620 21:36:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:17.620 21:36:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:17.620 { 00:22:17.620 "params": { 00:22:17.620 "name": "Nvme$subsystem", 00:22:17.620 "trtype": "$TEST_TRANSPORT", 00:22:17.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.620 "adrfam": "ipv4", 00:22:17.620 "trsvcid": "$NVMF_PORT", 00:22:17.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.620 "hdgst": ${hdgst:-false}, 00:22:17.620 "ddgst": ${ddgst:-false} 00:22:17.620 }, 00:22:17.620 "method": "bdev_nvme_attach_controller" 00:22:17.620 } 00:22:17.620 EOF 00:22:17.620 )") 00:22:17.620 21:36:42 -- target/dif.sh@82 -- # gen_fio_conf 00:22:17.620 21:36:42 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
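Before the fio launch above, each create_subsystem iteration pairs a DIF-capable null bdev with an NVMe-oF subsystem: bdev_null_create makes a small null bdev with 512-byte blocks and 16 bytes of per-block metadata, which is then exposed as a namespace behind a TCP listener. Condensed into a sketch, with the literal values this run resolved inlined where target/dif.sh presumably uses variables:

create_subsystem() {
    local sub_id=$1
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, and the
    # DIF type under test (2 in this run, 3 in the previous one).
    rpc_cmd bdev_null_create "bdev_null$sub_id" 64 512 \
        --md-size 16 --dif-type 2
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
        --serial-number "53313233-$sub_id" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" \
        "bdev_null$sub_id"
    # 10.0.0.2:4420 are the values this run resolved for
    # $NVMF_FIRST_TARGET_IP and $NVMF_PORT.
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
        -t tcp -a 10.0.0.2 -s 4420
}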
00:22:17.620 21:36:42 -- target/dif.sh@54 -- # local file 00:22:17.620 21:36:42 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:17.620 21:36:42 -- target/dif.sh@56 -- # cat 00:22:17.620 21:36:42 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:17.620 21:36:42 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:17.620 21:36:42 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:17.620 21:36:42 -- common/autotest_common.sh@1327 -- # shift 00:22:17.620 21:36:42 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:17.620 21:36:42 -- nvmf/common.sh@543 -- # cat 00:22:17.620 21:36:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.620 21:36:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:17.620 21:36:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:17.620 21:36:42 -- target/dif.sh@72 -- # (( file <= files )) 00:22:17.620 21:36:42 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:17.620 21:36:42 -- target/dif.sh@73 -- # cat 00:22:17.620 21:36:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:17.620 21:36:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:17.620 21:36:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:17.620 { 00:22:17.620 "params": { 00:22:17.620 "name": "Nvme$subsystem", 00:22:17.620 "trtype": "$TEST_TRANSPORT", 00:22:17.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.620 "adrfam": "ipv4", 00:22:17.620 "trsvcid": "$NVMF_PORT", 00:22:17.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.620 "hdgst": ${hdgst:-false}, 00:22:17.620 "ddgst": ${ddgst:-false} 00:22:17.620 }, 00:22:17.620 "method": "bdev_nvme_attach_controller" 00:22:17.620 } 00:22:17.620 EOF 00:22:17.620 )") 00:22:17.620 21:36:42 -- nvmf/common.sh@543 -- # cat 00:22:17.620 21:36:42 -- target/dif.sh@72 -- # (( file++ )) 00:22:17.620 21:36:42 -- target/dif.sh@72 -- # (( file <= files )) 00:22:17.620 21:36:42 -- target/dif.sh@73 -- # cat 00:22:17.620 21:36:42 -- target/dif.sh@72 -- # (( file++ )) 00:22:17.620 21:36:42 -- target/dif.sh@72 -- # (( file <= files )) 00:22:17.620 21:36:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:17.620 21:36:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:17.620 { 00:22:17.620 "params": { 00:22:17.620 "name": "Nvme$subsystem", 00:22:17.620 "trtype": "$TEST_TRANSPORT", 00:22:17.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.620 "adrfam": "ipv4", 00:22:17.620 "trsvcid": "$NVMF_PORT", 00:22:17.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.620 "hdgst": ${hdgst:-false}, 00:22:17.620 "ddgst": ${ddgst:-false} 00:22:17.620 }, 00:22:17.620 "method": "bdev_nvme_attach_controller" 00:22:17.620 } 00:22:17.620 EOF 00:22:17.620 )") 00:22:17.620 21:36:42 -- nvmf/common.sh@543 -- # cat 00:22:17.620 21:36:42 -- nvmf/common.sh@545 -- # jq . 
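The file counter interleaved with the config heredocs above is gen_fio_conf emitting the fio job file in lockstep: one cat for the [global] section (@56), then one job section per attached bdev (@72/@73). Both generated streams reach fio as bash process substitutions, which is where the /dev/fd/62 and /dev/fd/61 paths in the fio_bdev command line come from; neither file ever touches disk. A sketch under those assumptions (the section contents are illustrative; only the loop structure and the filenameN job names are visible in this log):

gen_fio_conf() {
    local file
    # Global settings shared by every job; $bs, $numjobs and $iodepth
    # come from the test case (4k/8/16 for this run).
    cat <<EOF
[global]
thread=1
rw=randread
bs=$bs
numjobs=$numjobs
iodepth=$iodepth
EOF
    # One job section per file; $files is set by the caller (3 here).
    for ((file = 1; file <= files; file++)); do
        cat <<EOF
[filename$((file - 1))]
filename=Nvme$((file - 1))n1
EOF
    done
}

fio_bdev --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1 2) \
    <(gen_fio_conf)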
00:22:17.620 21:36:42 -- nvmf/common.sh@546 -- # IFS=, 00:22:17.620 21:36:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:17.620 "params": { 00:22:17.620 "name": "Nvme0", 00:22:17.620 "trtype": "tcp", 00:22:17.620 "traddr": "10.0.0.2", 00:22:17.620 "adrfam": "ipv4", 00:22:17.620 "trsvcid": "4420", 00:22:17.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:17.620 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:17.620 "hdgst": false, 00:22:17.620 "ddgst": false 00:22:17.620 }, 00:22:17.620 "method": "bdev_nvme_attach_controller" 00:22:17.620 },{ 00:22:17.620 "params": { 00:22:17.620 "name": "Nvme1", 00:22:17.620 "trtype": "tcp", 00:22:17.620 "traddr": "10.0.0.2", 00:22:17.620 "adrfam": "ipv4", 00:22:17.620 "trsvcid": "4420", 00:22:17.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.620 "hdgst": false, 00:22:17.620 "ddgst": false 00:22:17.620 }, 00:22:17.620 "method": "bdev_nvme_attach_controller" 00:22:17.620 },{ 00:22:17.620 "params": { 00:22:17.620 "name": "Nvme2", 00:22:17.620 "trtype": "tcp", 00:22:17.620 "traddr": "10.0.0.2", 00:22:17.620 "adrfam": "ipv4", 00:22:17.620 "trsvcid": "4420", 00:22:17.620 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:17.620 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:17.620 "hdgst": false, 00:22:17.620 "ddgst": false 00:22:17.620 }, 00:22:17.620 "method": "bdev_nvme_attach_controller" 00:22:17.620 }' 00:22:17.620 21:36:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:17.620 21:36:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:17.620 21:36:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.620 21:36:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:17.620 21:36:42 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:17.620 21:36:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:17.620 21:36:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:17.620 21:36:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:17.620 21:36:42 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:17.620 21:36:42 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:17.620 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:17.620 ... 00:22:17.620 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:17.620 ... 00:22:17.620 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:17.620 ... 
00:22:17.620 fio-3.35 00:22:17.620 Starting 24 threads 00:22:17.620 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.828 00:22:29.828 filename0: (groupid=0, jobs=1): err= 0: pid=2695346: Wed Apr 24 21:36:53 2024 00:22:29.828 read: IOPS=426, BW=1706KiB/s (1746kB/s)(16.7MiB/10005msec) 00:22:29.828 slat (usec): min=8, max=516, avg=33.81, stdev=17.38 00:22:29.828 clat (msec): min=12, max=239, avg=37.23, stdev=24.63 00:22:29.828 lat (msec): min=12, max=239, avg=37.27, stdev=24.63 00:22:29.828 clat percentiles (msec): 00:22:29.829 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.829 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.829 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 40], 00:22:29.829 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 239], 99.95th=[ 239], 00:22:29.829 | 99.99th=[ 241] 00:22:29.829 bw ( KiB/s): min= 256, max= 1984, per=4.14%, avg=1695.16, stdev=527.23, samples=19 00:22:29.829 iops : min= 64, max= 496, avg=423.79, stdev=131.81, samples=19 00:22:29.829 lat (msec) : 20=0.28%, 50=96.86%, 100=0.23%, 250=2.63% 00:22:29.829 cpu : usr=87.70%, sys=5.49%, ctx=290, majf=0, minf=10 00:22:29.829 IO depths : 1=4.9%, 2=10.5%, 4=23.0%, 8=53.9%, 16=7.7%, 32=0.0%, >=64=0.0% 00:22:29.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 issued rwts: total=4266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.829 filename0: (groupid=0, jobs=1): err= 0: pid=2695347: Wed Apr 24 21:36:53 2024 00:22:29.829 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.7MiB/10011msec) 00:22:29.829 slat (usec): min=8, max=1091, avg=47.04, stdev=32.97 00:22:29.829 clat (msec): min=15, max=256, avg=37.06, stdev=25.51 00:22:29.829 lat (msec): min=15, max=256, avg=37.10, stdev=25.51 00:22:29.829 clat percentiles (msec): 00:22:29.829 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.829 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.829 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:22:29.829 | 99.00th=[ 199], 99.50th=[ 247], 99.90th=[ 257], 99.95th=[ 257], 00:22:29.829 | 99.99th=[ 257] 00:22:29.829 bw ( KiB/s): min= 240, max= 1920, per=4.13%, avg=1690.95, stdev=535.62, samples=19 00:22:29.829 iops : min= 60, max= 480, avg=422.74, stdev=133.91, samples=19 00:22:29.829 lat (msec) : 20=0.37%, 50=97.05%, 100=0.33%, 250=1.87%, 500=0.37% 00:22:29.829 cpu : usr=92.02%, sys=3.79%, ctx=175, majf=0, minf=9 00:22:29.829 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:29.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.829 filename0: (groupid=0, jobs=1): err= 0: pid=2695348: Wed Apr 24 21:36:53 2024 00:22:29.829 read: IOPS=429, BW=1719KiB/s (1761kB/s)(16.8MiB/10018msec) 00:22:29.829 slat (usec): min=8, max=155, avg=17.32, stdev=11.06 00:22:29.829 clat (msec): min=18, max=225, avg=37.08, stdev=20.18 00:22:29.829 lat (msec): min=18, max=225, avg=37.10, stdev=20.18 00:22:29.829 clat percentiles (msec): 00:22:29.829 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:29.829 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 
60.00th=[ 34], 00:22:29.829 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 37], 00:22:29.829 | 99.00th=[ 165], 99.50th=[ 184], 99.90th=[ 197], 99.95th=[ 197], 00:22:29.829 | 99.99th=[ 226] 00:22:29.829 bw ( KiB/s): min= 368, max= 1968, per=4.19%, avg=1716.00, stdev=486.52, samples=20 00:22:29.829 iops : min= 92, max= 492, avg=429.00, stdev=121.63, samples=20 00:22:29.829 lat (msec) : 20=0.09%, 50=96.42%, 100=0.51%, 250=2.97% 00:22:29.829 cpu : usr=96.38%, sys=2.51%, ctx=51, majf=0, minf=9 00:22:29.829 IO depths : 1=5.2%, 2=10.8%, 4=22.9%, 8=53.6%, 16=7.5%, 32=0.0%, >=64=0.0% 00:22:29.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 issued rwts: total=4306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.829 filename0: (groupid=0, jobs=1): err= 0: pid=2695349: Wed Apr 24 21:36:53 2024 00:22:29.829 read: IOPS=433, BW=1732KiB/s (1774kB/s)(16.9MiB/10013msec) 00:22:29.829 slat (usec): min=8, max=118, avg=36.11, stdev=18.48 00:22:29.829 clat (msec): min=12, max=243, avg=36.63, stdev=21.36 00:22:29.829 lat (msec): min=12, max=243, avg=36.66, stdev=21.37 00:22:29.829 clat percentiles (msec): 00:22:29.829 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.829 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.829 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 40], 00:22:29.829 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 192], 99.95th=[ 192], 00:22:29.829 | 99.99th=[ 243] 00:22:29.829 bw ( KiB/s): min= 384, max= 2064, per=4.23%, avg=1733.60, stdev=495.69, samples=20 00:22:29.829 iops : min= 96, max= 516, avg=433.40, stdev=123.92, samples=20 00:22:29.829 lat (msec) : 20=0.18%, 50=95.94%, 100=1.25%, 250=2.63% 00:22:29.829 cpu : usr=97.29%, sys=1.85%, ctx=135, majf=0, minf=9 00:22:29.829 IO depths : 1=4.4%, 2=10.3%, 4=23.7%, 8=53.4%, 16=8.2%, 32=0.0%, >=64=0.0% 00:22:29.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.829 filename0: (groupid=0, jobs=1): err= 0: pid=2695350: Wed Apr 24 21:36:53 2024 00:22:29.829 read: IOPS=426, BW=1708KiB/s (1749kB/s)(16.7MiB/10005msec) 00:22:29.829 slat (usec): min=9, max=1346, avg=36.39, stdev=25.49 00:22:29.829 clat (msec): min=6, max=261, avg=37.17, stdev=24.97 00:22:29.829 lat (msec): min=6, max=261, avg=37.21, stdev=24.97 00:22:29.829 clat percentiles (msec): 00:22:29.829 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.829 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.829 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:22:29.829 | 99.00th=[ 192], 99.50th=[ 207], 99.90th=[ 239], 99.95th=[ 239], 00:22:29.829 | 99.99th=[ 262] 00:22:29.829 bw ( KiB/s): min= 256, max= 1976, per=4.15%, avg=1697.68, stdev=528.02, samples=19 00:22:29.829 iops : min= 64, max= 494, avg=424.42, stdev=132.00, samples=19 00:22:29.829 lat (msec) : 10=0.30%, 20=0.28%, 50=96.40%, 100=0.73%, 250=2.25% 00:22:29.829 lat (msec) : 500=0.05% 00:22:29.829 cpu : usr=95.08%, sys=2.72%, ctx=180, majf=0, minf=9 00:22:29.829 IO depths : 1=5.5%, 2=11.1%, 4=23.8%, 8=52.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:22:29.829 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.829 filename0: (groupid=0, jobs=1): err= 0: pid=2695351: Wed Apr 24 21:36:53 2024 00:22:29.829 read: IOPS=425, BW=1701KiB/s (1742kB/s)(16.6MiB/10006msec) 00:22:29.829 slat (usec): min=11, max=115, avg=55.29, stdev=20.92 00:22:29.829 clat (msec): min=18, max=239, avg=37.16, stdev=24.41 00:22:29.829 lat (msec): min=18, max=239, avg=37.21, stdev=24.41 00:22:29.829 clat percentiles (msec): 00:22:29.829 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.829 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.829 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:22:29.829 | 99.00th=[ 190], 99.50th=[ 192], 99.90th=[ 239], 99.95th=[ 239], 00:22:29.829 | 99.99th=[ 239] 00:22:29.829 bw ( KiB/s): min= 256, max= 1920, per=4.13%, avg=1690.95, stdev=526.63, samples=19 00:22:29.829 iops : min= 64, max= 480, avg=422.74, stdev=131.66, samples=19 00:22:29.829 lat (msec) : 20=0.05%, 50=97.32%, 100=0.05%, 250=2.58% 00:22:29.829 cpu : usr=97.03%, sys=2.03%, ctx=225, majf=0, minf=9 00:22:29.829 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:22:29.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.829 filename0: (groupid=0, jobs=1): err= 0: pid=2695352: Wed Apr 24 21:36:53 2024 00:22:29.829 read: IOPS=424, BW=1700KiB/s (1741kB/s)(16.6MiB/10019msec) 00:22:29.829 slat (usec): min=5, max=1277, avg=58.72, stdev=43.61 00:22:29.829 clat (msec): min=16, max=266, avg=37.34, stdev=25.98 00:22:29.829 lat (msec): min=16, max=266, avg=37.39, stdev=25.98 00:22:29.829 clat percentiles (msec): 00:22:29.829 | 1.00th=[ 21], 5.00th=[ 30], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.829 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.829 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 44], 00:22:29.829 | 99.00th=[ 199], 99.50th=[ 203], 99.90th=[ 268], 99.95th=[ 268], 00:22:29.829 | 99.99th=[ 268] 00:22:29.829 bw ( KiB/s): min= 240, max= 1968, per=4.12%, avg=1685.05, stdev=534.51, samples=19 00:22:29.829 iops : min= 60, max= 492, avg=421.26, stdev=133.63, samples=19 00:22:29.829 lat (msec) : 20=0.54%, 50=95.37%, 100=1.83%, 250=1.88%, 500=0.38% 00:22:29.829 cpu : usr=91.34%, sys=3.94%, ctx=226, majf=0, minf=9 00:22:29.829 IO depths : 1=1.2%, 2=3.6%, 4=10.4%, 8=70.4%, 16=14.4%, 32=0.0%, >=64=0.0% 00:22:29.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 complete : 0=0.0%, 4=91.2%, 8=6.1%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.829 issued rwts: total=4258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.829 filename0: (groupid=0, jobs=1): err= 0: pid=2695353: Wed Apr 24 21:36:53 2024 00:22:29.829 read: IOPS=426, BW=1706KiB/s (1747kB/s)(16.7MiB/10007msec) 00:22:29.829 slat (nsec): min=8263, max=76951, avg=25134.54, stdev=10091.70 00:22:29.829 clat (msec): min=7, max=252, avg=37.34, stdev=25.43 00:22:29.829 lat (msec): min=7, max=252, avg=37.36, stdev=25.43 00:22:29.829 clat 
percentiles (msec): 00:22:29.829 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.829 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.829 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 36], 00:22:29.829 | 99.00th=[ 201], 99.50th=[ 247], 99.90th=[ 253], 99.95th=[ 253], 00:22:29.829 | 99.99th=[ 253] 00:22:29.830 bw ( KiB/s): min= 240, max= 2016, per=4.16%, avg=1703.20, stdev=524.77, samples=20 00:22:29.830 iops : min= 60, max= 504, avg=425.80, stdev=131.19, samples=20 00:22:29.830 lat (msec) : 10=0.14%, 20=0.23%, 50=96.84%, 100=0.54%, 250=1.83% 00:22:29.830 lat (msec) : 500=0.42% 00:22:29.830 cpu : usr=97.70%, sys=1.84%, ctx=25, majf=0, minf=9 00:22:29.830 IO depths : 1=1.7%, 2=5.2%, 4=16.0%, 8=64.4%, 16=12.7%, 32=0.0%, >=64=0.0% 00:22:29.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 complete : 0=0.0%, 4=92.6%, 8=3.6%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 issued rwts: total=4268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.830 filename1: (groupid=0, jobs=1): err= 0: pid=2695354: Wed Apr 24 21:36:53 2024 00:22:29.830 read: IOPS=423, BW=1695KiB/s (1735kB/s)(16.6MiB/10017msec) 00:22:29.830 slat (usec): min=8, max=1170, avg=40.32, stdev=35.88 00:22:29.830 clat (msec): min=14, max=269, avg=37.47, stdev=24.84 00:22:29.830 lat (msec): min=14, max=269, avg=37.51, stdev=24.84 00:22:29.830 clat percentiles (msec): 00:22:29.830 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.830 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.830 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 44], 00:22:29.830 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 241], 99.95th=[ 241], 00:22:29.830 | 99.99th=[ 271] 00:22:29.830 bw ( KiB/s): min= 256, max= 2000, per=4.13%, avg=1691.20, stdev=510.82, samples=20 00:22:29.830 iops : min= 64, max= 500, avg=422.80, stdev=127.70, samples=20 00:22:29.830 lat (msec) : 20=0.85%, 50=95.22%, 100=1.30%, 250=2.59%, 500=0.05% 00:22:29.830 cpu : usr=94.24%, sys=2.90%, ctx=89, majf=0, minf=9 00:22:29.830 IO depths : 1=1.5%, 2=6.8%, 4=21.8%, 8=58.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:22:29.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 complete : 0=0.0%, 4=93.6%, 8=1.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 issued rwts: total=4244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.830 filename1: (groupid=0, jobs=1): err= 0: pid=2695355: Wed Apr 24 21:36:53 2024 00:22:29.830 read: IOPS=430, BW=1721KiB/s (1762kB/s)(16.8MiB/10017msec) 00:22:29.830 slat (usec): min=8, max=101, avg=31.18, stdev=13.62 00:22:29.830 clat (msec): min=16, max=238, avg=36.92, stdev=23.94 00:22:29.830 lat (msec): min=16, max=238, avg=36.95, stdev=23.94 00:22:29.830 clat percentiles (msec): 00:22:29.830 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.830 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.830 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:22:29.830 | 99.00th=[ 190], 99.50th=[ 192], 99.90th=[ 207], 99.95th=[ 207], 00:22:29.830 | 99.99th=[ 239] 00:22:29.830 bw ( KiB/s): min= 256, max= 2096, per=4.19%, avg=1717.60, stdev=521.11, samples=20 00:22:29.830 iops : min= 64, max= 524, avg=429.40, stdev=130.28, samples=20 00:22:29.830 lat (msec) : 20=0.30%, 50=97.05%, 100=0.05%, 250=2.60% 00:22:29.830 cpu : 
usr=97.72%, sys=1.70%, ctx=88, majf=0, minf=9 00:22:29.830 IO depths : 1=4.1%, 2=10.1%, 4=24.1%, 8=53.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:22:29.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 issued rwts: total=4310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.830 filename1: (groupid=0, jobs=1): err= 0: pid=2695356: Wed Apr 24 21:36:53 2024 00:22:29.830 read: IOPS=427, BW=1709KiB/s (1750kB/s)(16.7MiB/10027msec) 00:22:29.830 slat (usec): min=5, max=336, avg=23.75, stdev=13.78 00:22:29.830 clat (msec): min=16, max=222, avg=37.26, stdev=22.71 00:22:29.830 lat (msec): min=16, max=222, avg=37.28, stdev=22.71 00:22:29.830 clat percentiles (msec): 00:22:29.830 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.830 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.830 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 43], 00:22:29.830 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 194], 99.95th=[ 194], 00:22:29.830 | 99.99th=[ 222] 00:22:29.830 bw ( KiB/s): min= 368, max= 1920, per=4.17%, avg=1707.20, stdev=494.52, samples=20 00:22:29.830 iops : min= 92, max= 480, avg=426.80, stdev=123.63, samples=20 00:22:29.830 lat (msec) : 20=0.56%, 50=96.01%, 100=0.44%, 250=2.99% 00:22:29.830 cpu : usr=92.22%, sys=3.71%, ctx=98, majf=0, minf=9 00:22:29.830 IO depths : 1=3.9%, 2=9.4%, 4=22.8%, 8=55.2%, 16=8.7%, 32=0.0%, >=64=0.0% 00:22:29.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 issued rwts: total=4284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.830 filename1: (groupid=0, jobs=1): err= 0: pid=2695357: Wed Apr 24 21:36:53 2024 00:22:29.830 read: IOPS=424, BW=1699KiB/s (1739kB/s)(16.6MiB/10013msec) 00:22:29.830 slat (usec): min=12, max=174, avg=58.50, stdev=21.01 00:22:29.830 clat (msec): min=15, max=238, avg=37.34, stdev=24.64 00:22:29.830 lat (msec): min=15, max=239, avg=37.40, stdev=24.63 00:22:29.830 clat percentiles (msec): 00:22:29.830 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.830 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.830 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 40], 00:22:29.830 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 239], 99.95th=[ 239], 00:22:29.830 | 99.99th=[ 239] 00:22:29.830 bw ( KiB/s): min= 256, max= 2048, per=4.14%, avg=1696.80, stdev=514.27, samples=20 00:22:29.830 iops : min= 64, max= 512, avg=424.20, stdev=128.57, samples=20 00:22:29.830 lat (msec) : 20=0.35%, 50=96.40%, 100=0.61%, 250=2.63% 00:22:29.830 cpu : usr=93.28%, sys=2.88%, ctx=42, majf=0, minf=9 00:22:29.830 IO depths : 1=2.2%, 2=5.2%, 4=12.2%, 8=67.2%, 16=13.2%, 32=0.0%, >=64=0.0% 00:22:29.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 complete : 0=0.0%, 4=91.5%, 8=5.5%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 issued rwts: total=4252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.830 filename1: (groupid=0, jobs=1): err= 0: pid=2695358: Wed Apr 24 21:36:53 2024 00:22:29.830 read: IOPS=429, BW=1716KiB/s (1757kB/s)(16.8MiB/10009msec) 00:22:29.830 slat (usec): min=8, max=1092, avg=34.40, stdev=25.29 
00:22:29.830 clat (msec): min=13, max=254, avg=36.96, stdev=25.18 00:22:29.830 lat (msec): min=13, max=254, avg=36.99, stdev=25.18 00:22:29.830 clat percentiles (msec): 00:22:29.830 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.830 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.830 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:22:29.830 | 99.00th=[ 199], 99.50th=[ 203], 99.90th=[ 255], 99.95th=[ 255], 00:22:29.830 | 99.99th=[ 255] 00:22:29.830 bw ( KiB/s): min= 256, max= 2096, per=4.15%, avg=1700.21, stdev=541.09, samples=19 00:22:29.830 iops : min= 64, max= 524, avg=425.05, stdev=135.27, samples=19 00:22:29.830 lat (msec) : 20=0.28%, 50=96.69%, 100=0.79%, 250=1.86%, 500=0.37% 00:22:29.830 cpu : usr=87.33%, sys=5.55%, ctx=113, majf=0, minf=9 00:22:29.830 IO depths : 1=4.2%, 2=9.9%, 4=23.4%, 8=53.9%, 16=8.5%, 32=0.0%, >=64=0.0% 00:22:29.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 issued rwts: total=4294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.830 filename1: (groupid=0, jobs=1): err= 0: pid=2695359: Wed Apr 24 21:36:53 2024 00:22:29.830 read: IOPS=425, BW=1700KiB/s (1741kB/s)(16.6MiB/10007msec) 00:22:29.830 slat (usec): min=8, max=103, avg=22.78, stdev=14.38 00:22:29.830 clat (msec): min=10, max=252, avg=37.46, stdev=25.22 00:22:29.830 lat (msec): min=10, max=252, avg=37.48, stdev=25.23 00:22:29.830 clat percentiles (msec): 00:22:29.830 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:22:29.830 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.830 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 37], 00:22:29.830 | 99.00th=[ 199], 99.50th=[ 203], 99.90th=[ 253], 99.95th=[ 253], 00:22:29.830 | 99.99th=[ 253] 00:22:29.830 bw ( KiB/s): min= 256, max= 1936, per=4.11%, avg=1683.37, stdev=532.82, samples=19 00:22:29.830 iops : min= 64, max= 484, avg=420.84, stdev=133.20, samples=19 00:22:29.830 lat (msec) : 20=0.78%, 50=95.75%, 100=1.22%, 250=1.88%, 500=0.38% 00:22:29.830 cpu : usr=97.84%, sys=1.66%, ctx=57, majf=0, minf=9 00:22:29.830 IO depths : 1=0.3%, 2=5.9%, 4=23.4%, 8=57.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:22:29.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 complete : 0=0.0%, 4=94.1%, 8=0.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 issued rwts: total=4254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.830 filename1: (groupid=0, jobs=1): err= 0: pid=2695360: Wed Apr 24 21:36:53 2024 00:22:29.830 read: IOPS=433, BW=1735KiB/s (1776kB/s)(17.0MiB/10013msec) 00:22:29.830 slat (usec): min=4, max=333, avg=21.03, stdev=10.67 00:22:29.830 clat (msec): min=13, max=215, avg=36.73, stdev=20.00 00:22:29.830 lat (msec): min=13, max=215, avg=36.75, stdev=20.00 00:22:29.830 clat percentiles (msec): 00:22:29.830 | 1.00th=[ 22], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:22:29.830 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.830 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 36], 00:22:29.830 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 184], 99.95th=[ 184], 00:22:29.830 | 99.99th=[ 215] 00:22:29.830 bw ( KiB/s): min= 368, max= 2048, per=4.23%, avg=1730.40, stdev=471.11, samples=20 00:22:29.830 iops : min= 92, max= 512, avg=432.60, 
stdev=117.78, samples=20 00:22:29.830 lat (msec) : 20=0.41%, 50=95.99%, 100=0.28%, 250=3.32% 00:22:29.830 cpu : usr=98.10%, sys=1.54%, ctx=16, majf=0, minf=9 00:22:29.830 IO depths : 1=5.5%, 2=11.3%, 4=23.3%, 8=52.6%, 16=7.2%, 32=0.0%, >=64=0.0% 00:22:29.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.830 issued rwts: total=4342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.831 filename1: (groupid=0, jobs=1): err= 0: pid=2695361: Wed Apr 24 21:36:53 2024 00:22:29.831 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.7MiB/10007msec) 00:22:29.831 slat (usec): min=8, max=119, avg=46.01, stdev=23.56 00:22:29.831 clat (msec): min=10, max=252, avg=37.16, stdev=25.63 00:22:29.831 lat (msec): min=10, max=252, avg=37.20, stdev=25.63 00:22:29.831 clat percentiles (msec): 00:22:29.831 | 1.00th=[ 17], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.831 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.831 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 47], 00:22:29.831 | 99.00th=[ 199], 99.50th=[ 203], 99.90th=[ 253], 99.95th=[ 253], 00:22:29.831 | 99.99th=[ 253] 00:22:29.831 bw ( KiB/s): min= 256, max= 2016, per=4.13%, avg=1690.11, stdev=536.00, samples=19 00:22:29.831 iops : min= 64, max= 504, avg=422.53, stdev=134.00, samples=19 00:22:29.831 lat (msec) : 20=2.30%, 50=93.96%, 100=1.50%, 250=1.83%, 500=0.42% 00:22:29.831 cpu : usr=97.78%, sys=1.59%, ctx=111, majf=0, minf=9 00:22:29.831 IO depths : 1=4.2%, 2=9.3%, 4=21.0%, 8=56.5%, 16=8.9%, 32=0.0%, >=64=0.0% 00:22:29.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 complete : 0=0.0%, 4=93.3%, 8=1.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 issued rwts: total=4270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.831 filename2: (groupid=0, jobs=1): err= 0: pid=2695362: Wed Apr 24 21:36:53 2024 00:22:29.831 read: IOPS=424, BW=1697KiB/s (1737kB/s)(16.6MiB/10011msec) 00:22:29.831 slat (usec): min=8, max=133, avg=37.01, stdev=26.31 00:22:29.831 clat (msec): min=16, max=254, avg=37.42, stdev=24.80 00:22:29.831 lat (msec): min=16, max=254, avg=37.45, stdev=24.80 00:22:29.831 clat percentiles (msec): 00:22:29.831 | 1.00th=[ 18], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.831 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.831 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 45], 00:22:29.831 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 234], 99.95th=[ 234], 00:22:29.831 | 99.99th=[ 255] 00:22:29.831 bw ( KiB/s): min= 256, max= 1936, per=4.10%, avg=1680.00, stdev=521.22, samples=19 00:22:29.831 iops : min= 64, max= 484, avg=420.00, stdev=130.31, samples=19 00:22:29.831 lat (msec) : 20=1.88%, 50=94.42%, 100=1.11%, 250=2.54%, 500=0.05% 00:22:29.831 cpu : usr=94.27%, sys=2.90%, ctx=159, majf=0, minf=9 00:22:29.831 IO depths : 1=2.0%, 2=7.7%, 4=23.2%, 8=56.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:22:29.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 issued rwts: total=4246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.831 filename2: (groupid=0, jobs=1): err= 0: pid=2695363: Wed Apr 24 21:36:53 2024 
00:22:29.831 read: IOPS=430, BW=1723KiB/s (1765kB/s)(16.9MiB/10027msec) 00:22:29.831 slat (usec): min=4, max=179, avg=16.63, stdev= 9.79 00:22:29.831 clat (msec): min=14, max=187, avg=37.00, stdev=21.43 00:22:29.831 lat (msec): min=14, max=187, avg=37.02, stdev=21.43 00:22:29.831 clat percentiles (msec): 00:22:29.831 | 1.00th=[ 16], 5.00th=[ 18], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.831 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.831 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 52], 00:22:29.831 | 99.00th=[ 167], 99.50th=[ 184], 99.90th=[ 188], 99.95th=[ 188], 00:22:29.831 | 99.99th=[ 188] 00:22:29.831 bw ( KiB/s): min= 384, max= 1992, per=4.20%, avg=1721.60, stdev=490.05, samples=20 00:22:29.831 iops : min= 96, max= 498, avg=430.40, stdev=122.51, samples=20 00:22:29.831 lat (msec) : 20=6.11%, 50=86.16%, 100=4.44%, 250=3.29% 00:22:29.831 cpu : usr=95.62%, sys=2.45%, ctx=139, majf=0, minf=9 00:22:29.831 IO depths : 1=4.3%, 2=9.7%, 4=21.8%, 8=55.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:22:29.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.831 filename2: (groupid=0, jobs=1): err= 0: pid=2695364: Wed Apr 24 21:36:53 2024 00:22:29.831 read: IOPS=430, BW=1721KiB/s (1762kB/s)(16.8MiB/10008msec) 00:22:29.831 slat (usec): min=8, max=1375, avg=29.35, stdev=25.49 00:22:29.831 clat (msec): min=10, max=295, avg=36.94, stdev=25.49 00:22:29.831 lat (msec): min=10, max=295, avg=36.97, stdev=25.49 00:22:29.831 clat percentiles (msec): 00:22:29.831 | 1.00th=[ 19], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.831 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.831 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 36], 00:22:29.831 | 99.00th=[ 199], 99.50th=[ 203], 99.90th=[ 262], 99.95th=[ 264], 00:22:29.831 | 99.99th=[ 296] 00:22:29.831 bw ( KiB/s): min= 256, max= 2128, per=4.17%, avg=1705.26, stdev=544.66, samples=19 00:22:29.831 iops : min= 64, max= 532, avg=426.32, stdev=136.17, samples=19 00:22:29.831 lat (msec) : 20=1.39%, 50=95.63%, 100=0.74%, 250=1.76%, 500=0.46% 00:22:29.831 cpu : usr=96.30%, sys=2.19%, ctx=190, majf=0, minf=9 00:22:29.831 IO depths : 1=4.4%, 2=10.2%, 4=23.6%, 8=53.6%, 16=8.2%, 32=0.0%, >=64=0.0% 00:22:29.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 issued rwts: total=4306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.831 filename2: (groupid=0, jobs=1): err= 0: pid=2695365: Wed Apr 24 21:36:53 2024 00:22:29.831 read: IOPS=426, BW=1708KiB/s (1749kB/s)(16.7MiB/10005msec) 00:22:29.831 slat (usec): min=8, max=138, avg=38.39, stdev=17.14 00:22:29.831 clat (msec): min=23, max=253, avg=37.13, stdev=22.79 00:22:29.831 lat (msec): min=23, max=253, avg=37.17, stdev=22.79 00:22:29.831 clat percentiles (msec): 00:22:29.831 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.831 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.831 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 36], 00:22:29.831 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 201], 99.95th=[ 201], 00:22:29.831 | 99.99th=[ 255] 00:22:29.831 bw ( KiB/s): 
min= 384, max= 1920, per=4.15%, avg=1697.68, stdev=506.35, samples=19 00:22:29.831 iops : min= 96, max= 480, avg=424.42, stdev=126.59, samples=19 00:22:29.831 lat (msec) : 50=96.96%, 100=0.47%, 250=2.53%, 500=0.05% 00:22:29.831 cpu : usr=95.76%, sys=2.49%, ctx=154, majf=0, minf=9 00:22:29.831 IO depths : 1=5.5%, 2=11.7%, 4=24.8%, 8=51.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:22:29.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.831 filename2: (groupid=0, jobs=1): err= 0: pid=2695366: Wed Apr 24 21:36:53 2024 00:22:29.831 read: IOPS=425, BW=1701KiB/s (1742kB/s)(16.6MiB/10007msec) 00:22:29.831 slat (nsec): min=3953, max=60237, avg=24561.41, stdev=10435.63 00:22:29.831 clat (msec): min=20, max=403, avg=37.38, stdev=29.79 00:22:29.831 lat (msec): min=20, max=403, avg=37.41, stdev=29.79 00:22:29.831 clat percentiles (msec): 00:22:29.831 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.831 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.831 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:22:29.831 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 405], 99.95th=[ 405], 00:22:29.831 | 99.99th=[ 405] 00:22:29.831 bw ( KiB/s): min= 128, max= 1920, per=4.11%, avg=1684.21, stdev=546.24, samples=19 00:22:29.831 iops : min= 32, max= 480, avg=421.05, stdev=136.56, samples=19 00:22:29.831 lat (msec) : 50=97.70%, 100=0.05%, 250=1.83%, 500=0.42% 00:22:29.831 cpu : usr=97.96%, sys=1.55%, ctx=16, majf=0, minf=9 00:22:29.831 IO depths : 1=5.4%, 2=11.6%, 4=24.7%, 8=51.2%, 16=7.1%, 32=0.0%, >=64=0.0% 00:22:29.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.831 filename2: (groupid=0, jobs=1): err= 0: pid=2695367: Wed Apr 24 21:36:53 2024 00:22:29.831 read: IOPS=425, BW=1700KiB/s (1741kB/s)(16.6MiB/10013msec) 00:22:29.831 slat (usec): min=8, max=308, avg=39.20, stdev=19.60 00:22:29.831 clat (msec): min=16, max=238, avg=37.30, stdev=24.97 00:22:29.831 lat (msec): min=16, max=238, avg=37.34, stdev=24.96 00:22:29.831 clat percentiles (msec): 00:22:29.831 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.831 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.831 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 37], 00:22:29.831 | 99.00th=[ 192], 99.50th=[ 207], 99.90th=[ 239], 99.95th=[ 239], 00:22:29.831 | 99.99th=[ 239] 00:22:29.831 bw ( KiB/s): min= 256, max= 1936, per=4.16%, avg=1701.60, stdev=514.63, samples=20 00:22:29.831 iops : min= 64, max= 484, avg=425.40, stdev=128.66, samples=20 00:22:29.831 lat (msec) : 20=0.09%, 50=97.09%, 100=0.56%, 250=2.26% 00:22:29.831 cpu : usr=92.34%, sys=3.77%, ctx=169, majf=0, minf=9 00:22:29.831 IO depths : 1=2.3%, 2=8.5%, 4=25.0%, 8=54.0%, 16=10.2%, 32=0.0%, >=64=0.0% 00:22:29.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.831 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.831 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:22:29.831 filename2: (groupid=0, jobs=1): err= 0: pid=2695368: Wed Apr 24 21:36:53 2024 00:22:29.831 read: IOPS=425, BW=1702KiB/s (1742kB/s)(16.6MiB/10005msec) 00:22:29.831 slat (usec): min=7, max=289, avg=32.21, stdev=12.66 00:22:29.831 clat (msec): min=25, max=250, avg=37.32, stdev=24.63 00:22:29.831 lat (msec): min=25, max=250, avg=37.35, stdev=24.63 00:22:29.831 clat percentiles (msec): 00:22:29.831 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.831 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.831 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:22:29.831 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 241], 99.95th=[ 241], 00:22:29.831 | 99.99th=[ 251] 00:22:29.832 bw ( KiB/s): min= 256, max= 1920, per=4.13%, avg=1690.95, stdev=526.41, samples=19 00:22:29.832 iops : min= 64, max= 480, avg=422.74, stdev=131.60, samples=19 00:22:29.832 lat (msec) : 50=97.37%, 250=2.58%, 500=0.05% 00:22:29.832 cpu : usr=96.55%, sys=2.02%, ctx=29, majf=0, minf=9 00:22:29.832 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:22:29.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.832 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.832 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.832 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.832 filename2: (groupid=0, jobs=1): err= 0: pid=2695369: Wed Apr 24 21:36:53 2024 00:22:29.832 read: IOPS=420, BW=1683KiB/s (1723kB/s)(16.4MiB/10007msec) 00:22:29.832 slat (usec): min=8, max=1117, avg=29.67, stdev=25.42 00:22:29.832 clat (msec): min=7, max=350, avg=37.85, stdev=25.93 00:22:29.832 lat (msec): min=7, max=350, avg=37.88, stdev=25.93 00:22:29.832 clat percentiles (msec): 00:22:29.832 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 33], 00:22:29.832 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:29.832 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 40], 95.00th=[ 50], 00:22:29.832 | 99.00th=[ 199], 99.50th=[ 203], 99.90th=[ 253], 99.95th=[ 253], 00:22:29.832 | 99.99th=[ 351] 00:22:29.832 bw ( KiB/s): min= 240, max= 1952, per=4.10%, avg=1680.40, stdev=515.00, samples=20 00:22:29.832 iops : min= 60, max= 488, avg=420.10, stdev=128.75, samples=20 00:22:29.832 lat (msec) : 10=0.14%, 20=1.24%, 50=93.73%, 100=2.61%, 250=1.90% 00:22:29.832 lat (msec) : 500=0.38% 00:22:29.832 cpu : usr=95.59%, sys=2.42%, ctx=56, majf=0, minf=9 00:22:29.832 IO depths : 1=0.2%, 2=2.7%, 4=12.7%, 8=69.5%, 16=14.8%, 32=0.0%, >=64=0.0% 00:22:29.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.832 complete : 0=0.0%, 4=92.0%, 8=4.7%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.832 issued rwts: total=4210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.832 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.832 00:22:29.832 Run status group 0 (all jobs): 00:22:29.832 READ: bw=40.0MiB/s (41.9MB/s), 1683KiB/s-1735KiB/s (1723kB/s-1776kB/s), io=401MiB (420MB), run=10005-10027msec 00:22:29.832 21:36:54 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:29.832 21:36:54 -- target/dif.sh@43 -- # local sub 00:22:29.832 21:36:54 -- target/dif.sh@45 -- # for sub in "$@" 00:22:29.832 21:36:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:29.832 21:36:54 -- target/dif.sh@36 -- # local sub_id=0 00:22:29.832 21:36:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
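The destroy_subsystems pass that starts here undoes each subsystem in two RPC calls: the NVMe-oF subsystem is deleted first, then its backing null bdev. As standalone rpc.py calls against the default /var/tmp/spdk.sock socket, one iteration would look roughly like the sketch below; the harness routes the same method names through its rpc_cmd wrapper, as the trace shows.

    # Tear down subsystem 0: remove the NVMe-oF subsystem, then its null bdev
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0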
00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@45 -- # for sub in "$@" 00:22:29.832 21:36:54 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:29.832 21:36:54 -- target/dif.sh@36 -- # local sub_id=1 00:22:29.832 21:36:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@45 -- # for sub in "$@" 00:22:29.832 21:36:54 -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:29.832 21:36:54 -- target/dif.sh@36 -- # local sub_id=2 00:22:29.832 21:36:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@115 -- # NULL_DIF=1 00:22:29.832 21:36:54 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:29.832 21:36:54 -- target/dif.sh@115 -- # numjobs=2 00:22:29.832 21:36:54 -- target/dif.sh@115 -- # iodepth=8 00:22:29.832 21:36:54 -- target/dif.sh@115 -- # runtime=5 00:22:29.832 21:36:54 -- target/dif.sh@115 -- # files=1 00:22:29.832 21:36:54 -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:29.832 21:36:54 -- target/dif.sh@28 -- # local sub 00:22:29.832 21:36:54 -- target/dif.sh@30 -- # for sub in "$@" 00:22:29.832 21:36:54 -- target/dif.sh@31 -- # create_subsystem 0 00:22:29.832 21:36:54 -- target/dif.sh@18 -- # local sub_id=0 00:22:29.832 21:36:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 bdev_null0 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@23 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 [2024-04-24 21:36:54.203597] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@30 -- # for sub in "$@" 00:22:29.832 21:36:54 -- target/dif.sh@31 -- # create_subsystem 1 00:22:29.832 21:36:54 -- target/dif.sh@18 -- # local sub_id=1 00:22:29.832 21:36:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 bdev_null1 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.832 21:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.832 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:29.832 21:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.832 21:36:54 -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:29.832 21:36:54 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:29.832 21:36:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:29.832 21:36:54 -- nvmf/common.sh@521 -- # config=() 00:22:29.832 21:36:54 -- nvmf/common.sh@521 -- # local subsystem config 00:22:29.832 21:36:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:29.832 21:36:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:29.832 { 00:22:29.832 "params": { 00:22:29.832 "name": "Nvme$subsystem", 00:22:29.832 "trtype": "$TEST_TRANSPORT", 00:22:29.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.832 "adrfam": "ipv4", 00:22:29.832 "trsvcid": "$NVMF_PORT", 00:22:29.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.832 "hdgst": ${hdgst:-false}, 00:22:29.832 "ddgst": ${ddgst:-false} 00:22:29.832 }, 00:22:29.832 "method": "bdev_nvme_attach_controller" 00:22:29.832 } 00:22:29.832 EOF 00:22:29.832 )") 00:22:29.832 21:36:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:29.832 21:36:54 -- 
common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:29.832 21:36:54 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:29.832 21:36:54 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:29.832 21:36:54 -- target/dif.sh@82 -- # gen_fio_conf 00:22:29.832 21:36:54 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:29.832 21:36:54 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:29.832 21:36:54 -- target/dif.sh@54 -- # local file 00:22:29.832 21:36:54 -- common/autotest_common.sh@1327 -- # shift 00:22:29.832 21:36:54 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:29.832 21:36:54 -- target/dif.sh@56 -- # cat 00:22:29.832 21:36:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.832 21:36:54 -- nvmf/common.sh@543 -- # cat 00:22:29.832 21:36:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:29.832 21:36:54 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:29.832 21:36:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:29.832 21:36:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:29.832 21:36:54 -- target/dif.sh@72 -- # (( file <= files )) 00:22:29.833 21:36:54 -- target/dif.sh@73 -- # cat 00:22:29.833 21:36:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:29.833 21:36:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:29.833 { 00:22:29.833 "params": { 00:22:29.833 "name": "Nvme$subsystem", 00:22:29.833 "trtype": "$TEST_TRANSPORT", 00:22:29.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.833 "adrfam": "ipv4", 00:22:29.833 "trsvcid": "$NVMF_PORT", 00:22:29.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.833 "hdgst": ${hdgst:-false}, 00:22:29.833 "ddgst": ${ddgst:-false} 00:22:29.833 }, 00:22:29.833 "method": "bdev_nvme_attach_controller" 00:22:29.833 } 00:22:29.833 EOF 00:22:29.833 )") 00:22:29.833 21:36:54 -- nvmf/common.sh@543 -- # cat 00:22:29.833 21:36:54 -- target/dif.sh@72 -- # (( file++ )) 00:22:29.833 21:36:54 -- target/dif.sh@72 -- # (( file <= files )) 00:22:29.833 21:36:54 -- nvmf/common.sh@545 -- # jq . 
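gen_nvmf_target_json, traced above, builds one bdev_nvme_attach_controller fragment per subsystem with a heredoc, joins the fragments with IFS=',', and runs the result through jq; the merged parameters it prints appear just below. A minimal bash sketch of the same pattern, with the outer "subsystems"/"bdev" wrapper assumed from SPDK's JSON config layout and the addresses taken from this run:

    # Build one attach-controller fragment per subsystem (values from this run)
    config=()
    for i in 0 1; do
        config+=("$(printf '{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false}}' "$i" "$i" "$i")")
    done
    # Join with commas, wrap in a bdev-subsystem document, let jq validate and pretty-print
    joined=$(IFS=,; printf '%s' "${config[*]}")
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "$joined" | jq .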
00:22:29.833 21:36:54 -- nvmf/common.sh@546 -- # IFS=, 00:22:29.833 21:36:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:29.833 "params": { 00:22:29.833 "name": "Nvme0", 00:22:29.833 "trtype": "tcp", 00:22:29.833 "traddr": "10.0.0.2", 00:22:29.833 "adrfam": "ipv4", 00:22:29.833 "trsvcid": "4420", 00:22:29.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:29.833 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:29.833 "hdgst": false, 00:22:29.833 "ddgst": false 00:22:29.833 }, 00:22:29.833 "method": "bdev_nvme_attach_controller" 00:22:29.833 },{ 00:22:29.833 "params": { 00:22:29.833 "name": "Nvme1", 00:22:29.833 "trtype": "tcp", 00:22:29.833 "traddr": "10.0.0.2", 00:22:29.833 "adrfam": "ipv4", 00:22:29.833 "trsvcid": "4420", 00:22:29.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.833 "hdgst": false, 00:22:29.833 "ddgst": false 00:22:29.833 }, 00:22:29.833 "method": "bdev_nvme_attach_controller" 00:22:29.833 }' 00:22:29.833 21:36:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:29.833 21:36:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:29.833 21:36:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.833 21:36:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:29.833 21:36:54 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:29.833 21:36:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:29.833 21:36:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:29.833 21:36:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:29.833 21:36:54 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:29.833 21:36:54 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:29.833 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:29.833 ... 00:22:29.833 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:29.833 ... 
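With the JSON assembled, the wrapper LD_PRELOADs SPDK's fio bdev plugin and hands fio both the job description and the bdev config over file descriptors (/dev/fd/61 and /dev/fd/62 above). Stripped of the fd plumbing, the launch reduces to roughly this sketch, where bdev.json and jobfile.fio stand in for those descriptors (the generated job here: randread, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5):

    # Paths taken from this CI workspace; spdk_json_conf points the ioengine at the attach-controller config
    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    LD_PRELOAD="$plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf bdev.json jobfile.fio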
00:22:29.833 fio-3.35 00:22:29.833 Starting 4 threads 00:22:29.833 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.100 00:22:35.100 filename0: (groupid=0, jobs=1): err= 0: pid=2696868: Wed Apr 24 21:37:00 2024 00:22:35.100 read: IOPS=1970, BW=15.4MiB/s (16.1MB/s)(77.0MiB/5004msec) 00:22:35.100 slat (usec): min=3, max=2273, avg=12.59, stdev=23.49 00:22:35.100 clat (usec): min=2712, max=10693, avg=4020.71, stdev=438.55 00:22:35.100 lat (usec): min=2721, max=10710, avg=4033.31, stdev=439.70 00:22:35.100 clat percentiles (usec): 00:22:35.100 | 1.00th=[ 3064], 5.00th=[ 3523], 10.00th=[ 3687], 20.00th=[ 3851], 00:22:35.100 | 30.00th=[ 3884], 40.00th=[ 3916], 50.00th=[ 3949], 60.00th=[ 3982], 00:22:35.100 | 70.00th=[ 4015], 80.00th=[ 4178], 90.00th=[ 4359], 95.00th=[ 4883], 00:22:35.100 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 6718], 99.95th=[10552], 00:22:35.100 | 99.99th=[10683] 00:22:35.100 bw ( KiB/s): min=14544, max=16176, per=26.19%, avg=15763.20, stdev=562.50, samples=10 00:22:35.100 iops : min= 1818, max= 2022, avg=1970.40, stdev=70.31, samples=10 00:22:35.100 lat (msec) : 4=67.86%, 10=32.06%, 20=0.08% 00:22:35.100 cpu : usr=95.32%, sys=4.22%, ctx=9, majf=0, minf=47 00:22:35.100 IO depths : 1=0.1%, 2=0.9%, 4=72.6%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.100 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.100 issued rwts: total=9860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.100 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:35.100 filename0: (groupid=0, jobs=1): err= 0: pid=2696869: Wed Apr 24 21:37:00 2024 00:22:35.100 read: IOPS=1855, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5002msec) 00:22:35.100 slat (nsec): min=4373, max=54429, avg=12646.93, stdev=5561.37 00:22:35.100 clat (usec): min=2553, max=8205, avg=4274.30, stdev=739.34 00:22:35.100 lat (usec): min=2563, max=8218, avg=4286.95, stdev=738.73 00:22:35.100 clat percentiles (usec): 00:22:35.100 | 1.00th=[ 3425], 5.00th=[ 3687], 10.00th=[ 3752], 20.00th=[ 3818], 00:22:35.100 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4113], 00:22:35.100 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 5800], 95.00th=[ 5997], 00:22:35.100 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 7242], 99.95th=[ 7963], 00:22:35.100 | 99.99th=[ 8225] 00:22:35.100 bw ( KiB/s): min=14560, max=15216, per=24.66%, avg=14841.30, stdev=181.59, samples=10 00:22:35.100 iops : min= 1820, max= 1902, avg=1855.10, stdev=22.75, samples=10 00:22:35.100 lat (msec) : 4=45.64%, 10=54.36% 00:22:35.100 cpu : usr=92.20%, sys=5.68%, ctx=241, majf=0, minf=47 00:22:35.100 IO depths : 1=0.1%, 2=0.5%, 4=72.0%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.101 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.101 issued rwts: total=9280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.101 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:35.101 filename1: (groupid=0, jobs=1): err= 0: pid=2696870: Wed Apr 24 21:37:00 2024 00:22:35.101 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5001msec) 00:22:35.101 slat (nsec): min=4429, max=50195, avg=12018.75, stdev=4870.18 00:22:35.101 clat (usec): min=2448, max=50191, avg=4337.06, stdev=1532.55 00:22:35.101 lat (usec): min=2462, max=50204, avg=4349.08, stdev=1532.08 00:22:35.101 clat percentiles (usec): 00:22:35.101 | 1.00th=[ 3326], 5.00th=[ 3654], 
10.00th=[ 3752], 20.00th=[ 3818], 00:22:35.101 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4080], 60.00th=[ 4146], 00:22:35.101 | 70.00th=[ 4228], 80.00th=[ 4621], 90.00th=[ 5735], 95.00th=[ 5932], 00:22:35.101 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6980], 99.95th=[50070], 00:22:35.101 | 99.99th=[50070] 00:22:35.101 bw ( KiB/s): min=13456, max=15216, per=24.25%, avg=14590.22, stdev=500.20, samples=9 00:22:35.101 iops : min= 1682, max= 1902, avg=1823.78, stdev=62.53, samples=9 00:22:35.101 lat (msec) : 4=43.16%, 10=56.76%, 100=0.09% 00:22:35.101 cpu : usr=95.28%, sys=4.24%, ctx=22, majf=0, minf=40 00:22:35.101 IO depths : 1=0.3%, 2=3.0%, 4=69.0%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.101 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.101 issued rwts: total=9146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.101 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:35.101 filename1: (groupid=0, jobs=1): err= 0: pid=2696871: Wed Apr 24 21:37:00 2024 00:22:35.101 read: IOPS=1869, BW=14.6MiB/s (15.3MB/s)(73.1MiB/5003msec) 00:22:35.101 slat (nsec): min=4383, max=41745, avg=11504.84, stdev=4409.47 00:22:35.101 clat (usec): min=2649, max=8765, avg=4243.00, stdev=728.97 00:22:35.101 lat (usec): min=2666, max=8790, avg=4254.51, stdev=728.36 00:22:35.101 clat percentiles (usec): 00:22:35.101 | 1.00th=[ 3195], 5.00th=[ 3621], 10.00th=[ 3752], 20.00th=[ 3851], 00:22:35.101 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4047], 00:22:35.101 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 5800], 95.00th=[ 5997], 00:22:35.101 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7373], 99.95th=[ 8586], 00:22:35.101 | 99.99th=[ 8717] 00:22:35.101 bw ( KiB/s): min=14784, max=15376, per=24.85%, avg=14953.60, stdev=189.79, samples=10 00:22:35.101 iops : min= 1848, max= 1922, avg=1869.20, stdev=23.72, samples=10 00:22:35.101 lat (msec) : 4=50.95%, 10=49.05% 00:22:35.101 cpu : usr=95.36%, sys=4.00%, ctx=51, majf=0, minf=32 00:22:35.101 IO depths : 1=0.2%, 2=1.2%, 4=71.8%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.101 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.101 issued rwts: total=9354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.101 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:35.101 00:22:35.101 Run status group 0 (all jobs): 00:22:35.101 READ: bw=58.8MiB/s (61.6MB/s), 14.3MiB/s-15.4MiB/s (15.0MB/s-16.1MB/s), io=294MiB (308MB), run=5001-5004msec 00:22:35.101 21:37:00 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:35.101 21:37:00 -- target/dif.sh@43 -- # local sub 00:22:35.101 21:37:00 -- target/dif.sh@45 -- # for sub in "$@" 00:22:35.101 21:37:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:35.101 21:37:00 -- target/dif.sh@36 -- # local sub_id=0 00:22:35.101 21:37:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:35.101 21:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.101 21:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 21:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.101 21:37:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:35.101 21:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.101 21:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 21:37:00 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.101 21:37:00 -- target/dif.sh@45 -- # for sub in "$@" 00:22:35.101 21:37:00 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:35.101 21:37:00 -- target/dif.sh@36 -- # local sub_id=1 00:22:35.101 21:37:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.101 21:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.101 21:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 21:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.101 21:37:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:35.101 21:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.101 21:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 21:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.101 00:22:35.101 real 0m24.271s 00:22:35.101 user 4m25.057s 00:22:35.101 sys 0m9.807s 00:22:35.101 21:37:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:35.101 21:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 ************************************ 00:22:35.101 END TEST fio_dif_rand_params 00:22:35.101 ************************************ 00:22:35.101 21:37:00 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:35.101 21:37:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:35.101 21:37:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:35.101 21:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 ************************************ 00:22:35.101 START TEST fio_dif_digest 00:22:35.101 ************************************ 00:22:35.101 21:37:00 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:22:35.101 21:37:00 -- target/dif.sh@123 -- # local NULL_DIF 00:22:35.101 21:37:00 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:35.101 21:37:00 -- target/dif.sh@125 -- # local hdgst ddgst 00:22:35.101 21:37:00 -- target/dif.sh@127 -- # NULL_DIF=3 00:22:35.101 21:37:00 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:35.101 21:37:00 -- target/dif.sh@127 -- # numjobs=3 00:22:35.101 21:37:00 -- target/dif.sh@127 -- # iodepth=3 00:22:35.101 21:37:00 -- target/dif.sh@127 -- # runtime=10 00:22:35.101 21:37:00 -- target/dif.sh@128 -- # hdgst=true 00:22:35.101 21:37:00 -- target/dif.sh@128 -- # ddgst=true 00:22:35.101 21:37:00 -- target/dif.sh@130 -- # create_subsystems 0 00:22:35.101 21:37:00 -- target/dif.sh@28 -- # local sub 00:22:35.101 21:37:00 -- target/dif.sh@30 -- # for sub in "$@" 00:22:35.101 21:37:00 -- target/dif.sh@31 -- # create_subsystem 0 00:22:35.101 21:37:00 -- target/dif.sh@18 -- # local sub_id=0 00:22:35.101 21:37:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:35.101 21:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.101 21:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 bdev_null0 00:22:35.101 21:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.101 21:37:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:35.101 21:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.101 21:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 21:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.101 21:37:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:35.101 
21:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.101 21:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 21:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.101 21:37:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:35.101 21:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.101 21:37:00 -- common/autotest_common.sh@10 -- # set +x 00:22:35.101 [2024-04-24 21:37:00.669573] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.101 21:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.101 21:37:00 -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:35.101 21:37:00 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:35.101 21:37:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:35.101 21:37:00 -- nvmf/common.sh@521 -- # config=() 00:22:35.101 21:37:00 -- nvmf/common.sh@521 -- # local subsystem config 00:22:35.101 21:37:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:35.101 21:37:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:35.101 21:37:00 -- target/dif.sh@82 -- # gen_fio_conf 00:22:35.101 21:37:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:35.101 { 00:22:35.101 "params": { 00:22:35.101 "name": "Nvme$subsystem", 00:22:35.101 "trtype": "$TEST_TRANSPORT", 00:22:35.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.101 "adrfam": "ipv4", 00:22:35.101 "trsvcid": "$NVMF_PORT", 00:22:35.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.101 "hdgst": ${hdgst:-false}, 00:22:35.101 "ddgst": ${ddgst:-false} 00:22:35.101 }, 00:22:35.101 "method": "bdev_nvme_attach_controller" 00:22:35.101 } 00:22:35.101 EOF 00:22:35.101 )") 00:22:35.101 21:37:00 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:35.101 21:37:00 -- target/dif.sh@54 -- # local file 00:22:35.101 21:37:00 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:35.101 21:37:00 -- target/dif.sh@56 -- # cat 00:22:35.101 21:37:00 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:35.101 21:37:00 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:35.101 21:37:00 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:35.101 21:37:00 -- common/autotest_common.sh@1327 -- # shift 00:22:35.101 21:37:00 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:35.102 21:37:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:35.102 21:37:00 -- nvmf/common.sh@543 -- # cat 00:22:35.102 21:37:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:35.102 21:37:00 -- target/dif.sh@72 -- # (( file <= files )) 00:22:35.102 21:37:00 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:35.102 21:37:00 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:35.102 21:37:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:35.102 21:37:00 -- nvmf/common.sh@545 -- # jq . 
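For the digest test the same JSON template is rendered once more, this time with hdgst and ddgst forced to true so the TCP initiator computes NVMe/TCP header and data digests on every PDU; the merged parameters are printed just below. The single-controller fragment amounts to this sketch:

    # Attach-controller params for the digest run; only hdgst/ddgst differ from the earlier config
    printf '{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode0","hostnqn":"nqn.2016-06.io.spdk:host0","hdgst":true,"ddgst":true}}' | jq .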
00:22:35.102 21:37:00 -- nvmf/common.sh@546 -- # IFS=, 00:22:35.102 21:37:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:35.102 "params": { 00:22:35.102 "name": "Nvme0", 00:22:35.102 "trtype": "tcp", 00:22:35.102 "traddr": "10.0.0.2", 00:22:35.102 "adrfam": "ipv4", 00:22:35.102 "trsvcid": "4420", 00:22:35.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:35.102 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:35.102 "hdgst": true, 00:22:35.102 "ddgst": true 00:22:35.102 }, 00:22:35.102 "method": "bdev_nvme_attach_controller" 00:22:35.102 }' 00:22:35.102 21:37:00 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:35.102 21:37:00 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:35.102 21:37:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:35.102 21:37:00 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:35.102 21:37:00 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:35.102 21:37:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:35.102 21:37:00 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:35.102 21:37:00 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:35.102 21:37:00 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:35.102 21:37:00 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:35.360 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:35.360 ... 00:22:35.360 fio-3.35 00:22:35.360 Starting 3 threads 00:22:35.360 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.629 00:22:47.629 filename0: (groupid=0, jobs=1): err= 0: pid=2697641: Wed Apr 24 21:37:11 2024 00:22:47.629 read: IOPS=144, BW=18.0MiB/s (18.9MB/s)(181MiB/10045msec) 00:22:47.629 slat (nsec): min=4614, max=35906, avg=13139.19, stdev=2975.79 00:22:47.629 clat (usec): min=7100, max=97149, avg=20764.72, stdev=11678.81 00:22:47.629 lat (usec): min=7112, max=97161, avg=20777.86, stdev=11678.81 00:22:47.629 clat percentiles (usec): 00:22:47.629 | 1.00th=[ 9372], 5.00th=[12125], 10.00th=[14091], 20.00th=[16319], 00:22:47.629 | 30.00th=[17171], 40.00th=[17695], 50.00th=[18220], 60.00th=[18482], 00:22:47.629 | 70.00th=[19006], 80.00th=[19792], 90.00th=[21365], 95.00th=[57410], 00:22:47.629 | 99.00th=[59507], 99.50th=[60556], 99.90th=[95945], 99.95th=[96994], 00:22:47.629 | 99.99th=[96994] 00:22:47.629 bw ( KiB/s): min=14080, max=23808, per=28.29%, avg=18510.60, stdev=2666.99, samples=20 00:22:47.629 iops : min= 110, max= 186, avg=144.60, stdev=20.84, samples=20 00:22:47.629 lat (msec) : 10=1.31%, 20=81.35%, 50=9.39%, 100=7.94% 00:22:47.629 cpu : usr=91.49%, sys=8.06%, ctx=21, majf=0, minf=145 00:22:47.629 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:47.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.629 issued rwts: total=1448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.629 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:47.629 filename0: (groupid=0, jobs=1): err= 0: pid=2697642: Wed Apr 24 21:37:11 2024 00:22:47.629 read: IOPS=175, BW=22.0MiB/s (23.1MB/s)(221MiB/10045msec) 00:22:47.629 slat (nsec): min=4354, max=66186, avg=13102.44, stdev=3415.83 00:22:47.629 clat 
(usec): min=8301, max=98602, avg=17012.18, stdev=6828.25 00:22:47.629 lat (usec): min=8314, max=98614, avg=17025.29, stdev=6828.30 00:22:47.629 clat percentiles (usec): 00:22:47.629 | 1.00th=[ 9241], 5.00th=[11994], 10.00th=[12780], 20.00th=[13960], 00:22:47.629 | 30.00th=[15401], 40.00th=[16057], 50.00th=[16712], 60.00th=[17171], 00:22:47.629 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18744], 95.00th=[19268], 00:22:47.629 | 99.00th=[57934], 99.50th=[59507], 99.90th=[94897], 99.95th=[99091], 00:22:47.629 | 99.99th=[99091] 00:22:47.629 bw ( KiB/s): min=18688, max=26624, per=34.53%, avg=22592.00, stdev=2086.17, samples=20 00:22:47.629 iops : min= 146, max= 208, avg=176.50, stdev=16.30, samples=20 00:22:47.629 lat (msec) : 10=1.13%, 20=95.81%, 50=0.91%, 100=2.15% 00:22:47.629 cpu : usr=89.99%, sys=9.45%, ctx=26, majf=0, minf=178 00:22:47.629 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:47.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.629 issued rwts: total=1767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.629 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:47.629 filename0: (groupid=0, jobs=1): err= 0: pid=2697643: Wed Apr 24 21:37:11 2024 00:22:47.629 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(240MiB/10047msec) 00:22:47.629 slat (nsec): min=4510, max=39753, avg=12984.50, stdev=3006.93 00:22:47.629 clat (usec): min=6548, max=97608, avg=15650.44, stdev=7395.36 00:22:47.629 lat (usec): min=6560, max=97621, avg=15663.43, stdev=7395.32 00:22:47.629 clat percentiles (usec): 00:22:47.629 | 1.00th=[ 7504], 5.00th=[10028], 10.00th=[11207], 20.00th=[12649], 00:22:47.629 | 30.00th=[13829], 40.00th=[14615], 50.00th=[15139], 60.00th=[15533], 00:22:47.629 | 70.00th=[15926], 80.00th=[16319], 90.00th=[17171], 95.00th=[17957], 00:22:47.629 | 99.00th=[56361], 99.50th=[57410], 99.90th=[60031], 99.95th=[98042], 00:22:47.629 | 99.99th=[98042] 00:22:47.629 bw ( KiB/s): min=21248, max=28416, per=37.54%, avg=24563.20, stdev=2172.99, samples=20 00:22:47.629 iops : min= 166, max= 222, avg=191.90, stdev=16.98, samples=20 00:22:47.629 lat (msec) : 10=5.05%, 20=91.98%, 50=0.21%, 100=2.76% 00:22:47.629 cpu : usr=89.46%, sys=10.05%, ctx=17, majf=0, minf=136 00:22:47.629 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:47.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.629 issued rwts: total=1921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.629 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:47.629 00:22:47.629 Run status group 0 (all jobs): 00:22:47.629 READ: bw=63.9MiB/s (67.0MB/s), 18.0MiB/s-23.9MiB/s (18.9MB/s-25.1MB/s), io=642MiB (673MB), run=10045-10047msec 00:22:47.629 21:37:11 -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:47.629 21:37:11 -- target/dif.sh@43 -- # local sub 00:22:47.629 21:37:11 -- target/dif.sh@45 -- # for sub in "$@" 00:22:47.629 21:37:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:47.629 21:37:11 -- target/dif.sh@36 -- # local sub_id=0 00:22:47.629 21:37:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:47.629 21:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.629 21:37:11 -- common/autotest_common.sh@10 -- # set +x 00:22:47.629 21:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:22:47.629 21:37:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:47.629 21:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.629 21:37:11 -- common/autotest_common.sh@10 -- # set +x 00:22:47.629 21:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.629 00:22:47.629 real 0m11.134s 00:22:47.629 user 0m28.244s 00:22:47.629 sys 0m3.028s 00:22:47.629 21:37:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:47.629 21:37:11 -- common/autotest_common.sh@10 -- # set +x 00:22:47.629 ************************************ 00:22:47.629 END TEST fio_dif_digest 00:22:47.629 ************************************ 00:22:47.629 21:37:11 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:47.629 21:37:11 -- target/dif.sh@147 -- # nvmftestfini 00:22:47.629 21:37:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:47.629 21:37:11 -- nvmf/common.sh@117 -- # sync 00:22:47.629 21:37:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:47.629 21:37:11 -- nvmf/common.sh@120 -- # set +e 00:22:47.629 21:37:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.629 21:37:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:47.629 rmmod nvme_tcp 00:22:47.629 rmmod nvme_fabrics 00:22:47.629 rmmod nvme_keyring 00:22:47.629 21:37:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:47.629 21:37:11 -- nvmf/common.sh@124 -- # set -e 00:22:47.629 21:37:11 -- nvmf/common.sh@125 -- # return 0 00:22:47.629 21:37:11 -- nvmf/common.sh@478 -- # '[' -n 2691410 ']' 00:22:47.629 21:37:11 -- nvmf/common.sh@479 -- # killprocess 2691410 00:22:47.629 21:37:11 -- common/autotest_common.sh@936 -- # '[' -z 2691410 ']' 00:22:47.629 21:37:11 -- common/autotest_common.sh@940 -- # kill -0 2691410 00:22:47.629 21:37:11 -- common/autotest_common.sh@941 -- # uname 00:22:47.629 21:37:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.629 21:37:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2691410 00:22:47.629 21:37:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:47.629 21:37:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:47.629 21:37:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2691410' 00:22:47.629 killing process with pid 2691410 00:22:47.629 21:37:11 -- common/autotest_common.sh@955 -- # kill 2691410 00:22:47.629 21:37:11 -- common/autotest_common.sh@960 -- # wait 2691410 00:22:47.629 21:37:12 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:22:47.629 21:37:12 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:47.629 Waiting for block devices as requested 00:22:47.887 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:47.887 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:47.887 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:48.145 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:48.145 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:48.145 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:48.402 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:48.402 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:48.402 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:48.402 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:48.402 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:48.660 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:48.660 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:48.660 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:48.660 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:22:48.919 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:48.920 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:48.920 21:37:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:48.920 21:37:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:48.920 21:37:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.920 21:37:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:48.920 21:37:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.920 21:37:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:48.920 21:37:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.452 21:37:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:51.452 00:22:51.452 real 1m7.584s 00:22:51.452 user 6m21.770s 00:22:51.452 sys 0m22.533s 00:22:51.452 21:37:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:51.452 21:37:16 -- common/autotest_common.sh@10 -- # set +x 00:22:51.452 ************************************ 00:22:51.452 END TEST nvmf_dif 00:22:51.452 ************************************ 00:22:51.452 21:37:16 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:51.452 21:37:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:51.452 21:37:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:51.452 21:37:16 -- common/autotest_common.sh@10 -- # set +x 00:22:51.452 ************************************ 00:22:51.452 START TEST nvmf_abort_qd_sizes 00:22:51.452 ************************************ 00:22:51.452 21:37:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:51.452 * Looking for test storage... 
00:22:51.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:51.452 21:37:16 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.452 21:37:16 -- nvmf/common.sh@7 -- # uname -s 00:22:51.452 21:37:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.452 21:37:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.452 21:37:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.452 21:37:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.452 21:37:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.452 21:37:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.452 21:37:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.452 21:37:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.452 21:37:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.452 21:37:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.452 21:37:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.452 21:37:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.452 21:37:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.452 21:37:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.452 21:37:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.452 21:37:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.452 21:37:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.452 21:37:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.452 21:37:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.452 21:37:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.452 21:37:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.452 21:37:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.452 21:37:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.452 21:37:16 -- paths/export.sh@5 -- # export PATH 00:22:51.452 21:37:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.452 21:37:16 -- nvmf/common.sh@47 -- # : 0 00:22:51.452 21:37:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.452 21:37:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.452 21:37:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.452 21:37:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.452 21:37:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.452 21:37:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.452 21:37:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.452 21:37:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.452 21:37:16 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:51.452 21:37:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:51.452 21:37:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.452 21:37:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:51.452 21:37:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:51.452 21:37:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:51.452 21:37:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.452 21:37:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:51.452 21:37:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.452 21:37:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:51.452 21:37:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:51.452 21:37:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:51.452 21:37:16 -- common/autotest_common.sh@10 -- # set +x 00:22:53.352 21:37:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:53.352 21:37:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:53.352 21:37:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:53.352 21:37:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:53.352 21:37:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:53.352 21:37:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:53.352 21:37:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:53.352 21:37:18 -- nvmf/common.sh@295 -- # net_devs=() 00:22:53.352 21:37:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:53.352 21:37:18 -- nvmf/common.sh@296 -- # e810=() 00:22:53.352 21:37:18 -- nvmf/common.sh@296 -- # local -ga e810 00:22:53.352 21:37:18 -- nvmf/common.sh@297 -- # x722=() 00:22:53.352 21:37:18 -- nvmf/common.sh@297 -- # local -ga x722 00:22:53.352 21:37:18 -- nvmf/common.sh@298 -- # mlx=() 00:22:53.352 21:37:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:53.352 21:37:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.352 21:37:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.352 21:37:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.352 21:37:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.352 21:37:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.352 21:37:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.352 21:37:18 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.352 21:37:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.352 21:37:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.352 21:37:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.352 21:37:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.352 21:37:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:53.352 21:37:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:53.352 21:37:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:53.352 21:37:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:53.352 21:37:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:53.352 21:37:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:53.352 21:37:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.352 21:37:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:53.352 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:53.352 21:37:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.352 21:37:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.352 21:37:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.352 21:37:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.352 21:37:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.352 21:37:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.352 21:37:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:53.352 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:53.352 21:37:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.352 21:37:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.352 21:37:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.352 21:37:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.353 21:37:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.353 21:37:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:53.353 21:37:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:53.353 21:37:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:53.353 21:37:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.353 21:37:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.353 21:37:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:53.353 21:37:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.353 21:37:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:53.353 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:53.353 21:37:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.353 21:37:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.353 21:37:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.353 21:37:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:53.353 21:37:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.353 21:37:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:53.353 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:53.353 21:37:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.353 21:37:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:53.353 21:37:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:53.353 21:37:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:53.353 21:37:18 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:53.353 21:37:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:53.353 21:37:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.353 21:37:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.353 21:37:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.353 21:37:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:53.353 21:37:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.353 21:37:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.353 21:37:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:53.353 21:37:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.353 21:37:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.353 21:37:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:53.353 21:37:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:53.353 21:37:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.353 21:37:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.353 21:37:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.353 21:37:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.353 21:37:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:53.353 21:37:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.353 21:37:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.353 21:37:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.353 21:37:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:53.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:22:53.353 00:22:53.353 --- 10.0.0.2 ping statistics --- 00:22:53.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.353 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:22:53.353 21:37:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:53.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:22:53.353 00:22:53.353 --- 10.0.0.1 ping statistics --- 00:22:53.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.353 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:22:53.353 21:37:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.353 21:37:18 -- nvmf/common.sh@411 -- # return 0 00:22:53.353 21:37:18 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:22:53.353 21:37:18 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:54.730 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:54.730 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:54.730 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:54.730 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:54.730 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:54.730 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:54.730 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:54.730 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:54.730 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:54.730 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:54.730 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:54.730 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:54.730 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:54.730 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:54.730 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:54.730 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:55.667 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:22:55.667 21:37:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.667 21:37:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:55.667 21:37:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:55.667 21:37:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.667 21:37:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:55.667 21:37:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:55.667 21:37:21 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:55.667 21:37:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:55.667 21:37:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:55.667 21:37:21 -- common/autotest_common.sh@10 -- # set +x 00:22:55.667 21:37:21 -- nvmf/common.sh@470 -- # nvmfpid=2702554 00:22:55.667 21:37:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:55.667 21:37:21 -- nvmf/common.sh@471 -- # waitforlisten 2702554 00:22:55.667 21:37:21 -- common/autotest_common.sh@817 -- # '[' -z 2702554 ']' 00:22:55.667 21:37:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.667 21:37:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:55.667 21:37:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.667 21:37:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:55.667 21:37:21 -- common/autotest_common.sh@10 -- # set +x 00:22:55.667 [2024-04-24 21:37:21.307838] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
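The nvmf_tcp_init sequence above is the physical-NIC equivalent of a veth pair: of the two ice ports found, cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; TCP port 4420 is opened and reachability is proven with one ping in each direction. Condensed to its essentials, with the interface names and addresses this run used:

  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

Everything target-side from here on, including the nvmf_tgt started above, runs under 'ip netns exec cvl_0_0_ns_spdk', which is what the NVMF_TARGET_NS_CMD prefix in the trace expands to.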
00:22:55.667 [2024-04-24 21:37:21.307932] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.667 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.926 [2024-04-24 21:37:21.381374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.926 [2024-04-24 21:37:21.501018] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.926 [2024-04-24 21:37:21.501087] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.926 [2024-04-24 21:37:21.501103] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.926 [2024-04-24 21:37:21.501117] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.926 [2024-04-24 21:37:21.501129] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.926 [2024-04-24 21:37:21.501487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.926 [2024-04-24 21:37:21.501543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.926 [2024-04-24 21:37:21.501580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.926 [2024-04-24 21:37:21.501583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.860 21:37:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:56.860 21:37:22 -- common/autotest_common.sh@850 -- # return 0 00:22:56.860 21:37:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:56.860 21:37:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:56.860 21:37:22 -- common/autotest_common.sh@10 -- # set +x 00:22:56.860 21:37:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.860 21:37:22 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:56.860 21:37:22 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:56.860 21:37:22 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:56.860 21:37:22 -- scripts/common.sh@309 -- # local bdf bdfs 00:22:56.860 21:37:22 -- scripts/common.sh@310 -- # local nvmes 00:22:56.860 21:37:22 -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:22:56.860 21:37:22 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:22:56.860 21:37:22 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:22:56.860 21:37:22 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:22:56.860 21:37:22 -- scripts/common.sh@320 -- # uname -s 00:22:56.860 21:37:22 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:22:56.860 21:37:22 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:22:56.860 21:37:22 -- scripts/common.sh@325 -- # (( 1 )) 00:22:56.860 21:37:22 -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:22:56.860 21:37:22 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:22:56.860 21:37:22 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:22:56.860 21:37:22 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:56.860 21:37:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:56.860 21:37:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:56.860 21:37:22 -- 
common/autotest_common.sh@10 -- # set +x 00:22:56.860 ************************************ 00:22:56.860 START TEST spdk_target_abort 00:22:56.860 ************************************ 00:22:56.860 21:37:22 -- common/autotest_common.sh@1111 -- # spdk_target 00:22:56.860 21:37:22 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:56.860 21:37:22 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:22:56.860 21:37:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.860 21:37:22 -- common/autotest_common.sh@10 -- # set +x 00:23:00.138 spdk_targetn1 00:23:00.138 21:37:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.138 21:37:25 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.138 21:37:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.138 21:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:00.138 [2024-04-24 21:37:25.195548] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.138 21:37:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.138 21:37:25 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:00.138 21:37:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.138 21:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:00.138 21:37:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.138 21:37:25 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:00.138 21:37:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.138 21:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:00.138 21:37:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.138 21:37:25 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:23:00.138 21:37:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.138 21:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:00.138 [2024-04-24 21:37:25.227816] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.138 21:37:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
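The rabort loop in the trace here assembles the transport-ID string field by field for the abort example invoked right after it; the target it aims at was stood up a few lines earlier by the rpc_cmd calls, which map one-to-one onto scripts/rpc.py invocations against the default /var/tmp/spdk.sock. As a condensed sketch:

  # Attach the local PCIe drive as a bdev, then export its namespace over NVMe/TCP.
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

spdk_targetn1 is the namespace bdev the attach produced from controller spdk_target, which is why that is the name added to the subsystem before the listener goes live.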
00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:00.139 21:37:25 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:00.139 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.414 Initializing NVMe Controllers 00:23:03.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:03.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:03.414 Initialization complete. Launching workers. 00:23:03.414 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9470, failed: 0 00:23:03.414 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2176, failed to submit 7294 00:23:03.414 success 721, unsuccess 1455, failed 0 00:23:03.414 21:37:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:03.414 21:37:28 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:03.414 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.955 [2024-04-24 21:37:31.547665] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16fc770 is same with the state(5) to be set 00:23:06.211 Initializing NVMe Controllers 00:23:06.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:06.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:06.212 Initialization complete. Launching workers. 00:23:06.212 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8746, failed: 0 00:23:06.212 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 7501 00:23:06.212 success 319, unsuccess 926, failed 0 00:23:06.212 21:37:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:06.212 21:37:31 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:06.212 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.486 Initializing NVMe Controllers 00:23:09.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:09.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:09.486 Initialization complete. Launching workers. 
00:23:09.486 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31236, failed: 0 00:23:09.486 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2777, failed to submit 28459 00:23:09.486 success 563, unsuccess 2214, failed 0 00:23:09.486 21:37:34 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:09.486 21:37:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.486 21:37:34 -- common/autotest_common.sh@10 -- # set +x 00:23:09.486 21:37:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.486 21:37:34 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:09.486 21:37:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.486 21:37:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.858 21:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.858 21:37:36 -- target/abort_qd_sizes.sh@61 -- # killprocess 2702554 00:23:10.858 21:37:36 -- common/autotest_common.sh@936 -- # '[' -z 2702554 ']' 00:23:10.858 21:37:36 -- common/autotest_common.sh@940 -- # kill -0 2702554 00:23:10.858 21:37:36 -- common/autotest_common.sh@941 -- # uname 00:23:10.858 21:37:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:10.858 21:37:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2702554 00:23:10.858 21:37:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:10.858 21:37:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:10.858 21:37:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2702554' 00:23:10.858 killing process with pid 2702554 00:23:10.858 21:37:36 -- common/autotest_common.sh@955 -- # kill 2702554 00:23:10.858 21:37:36 -- common/autotest_common.sh@960 -- # wait 2702554 00:23:10.858 00:23:10.858 real 0m14.111s 00:23:10.858 user 0m54.894s 00:23:10.858 sys 0m2.898s 00:23:10.858 21:37:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:10.858 21:37:36 -- common/autotest_common.sh@10 -- # set +x 00:23:10.858 ************************************ 00:23:10.858 END TEST spdk_target_abort 00:23:10.858 ************************************ 00:23:10.858 21:37:36 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:10.858 21:37:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:10.858 21:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:10.858 21:37:36 -- common/autotest_common.sh@10 -- # set +x 00:23:11.116 ************************************ 00:23:11.116 START TEST kernel_target_abort 00:23:11.116 ************************************ 00:23:11.116 21:37:36 -- common/autotest_common.sh@1111 -- # kernel_target 00:23:11.116 21:37:36 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:11.116 21:37:36 -- nvmf/common.sh@717 -- # local ip 00:23:11.116 21:37:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:11.116 21:37:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:11.116 21:37:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.117 21:37:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.117 21:37:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:11.117 21:37:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.117 21:37:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:11.117 21:37:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:11.117 21:37:36 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:23:11.117 21:37:36 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:11.117 21:37:36 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:11.117 21:37:36 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:11.117 21:37:36 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:11.117 21:37:36 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:11.117 21:37:36 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:11.117 21:37:36 -- nvmf/common.sh@628 -- # local block nvme 00:23:11.117 21:37:36 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:23:11.117 21:37:36 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:11.117 21:37:36 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:11.117 21:37:36 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:12.117 Waiting for block devices as requested 00:23:12.117 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:12.375 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:12.375 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:12.634 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:12.634 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:12.634 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:12.634 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:12.892 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:12.892 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:12.892 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:12.892 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:13.150 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:13.150 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:13.150 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:13.150 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:13.408 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:13.408 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:13.666 21:37:39 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:13.666 21:37:39 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:13.666 21:37:39 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:13.666 21:37:39 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:13.666 21:37:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:13.666 21:37:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:13.666 21:37:39 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:13.666 21:37:39 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:13.666 21:37:39 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:13.666 No valid GPT data, bailing 00:23:13.666 21:37:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:13.666 21:37:39 -- scripts/common.sh@391 -- # pt= 00:23:13.666 21:37:39 -- scripts/common.sh@392 -- # return 1 00:23:13.666 21:37:39 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:13.666 21:37:39 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:23:13.666 21:37:39 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:13.666 21:37:39 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:13.666 21:37:39 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:13.666 21:37:39 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:13.666 21:37:39 -- nvmf/common.sh@656 -- # echo 1 00:23:13.666 21:37:39 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:23:13.666 21:37:39 -- nvmf/common.sh@658 -- # echo 1 00:23:13.666 21:37:39 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:13.666 21:37:39 -- nvmf/common.sh@661 -- # echo tcp 00:23:13.666 21:37:39 -- nvmf/common.sh@662 -- # echo 4420 00:23:13.666 21:37:39 -- nvmf/common.sh@663 -- # echo ipv4 00:23:13.666 21:37:39 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:13.667 21:37:39 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:13.667 00:23:13.667 Discovery Log Number of Records 2, Generation counter 2 00:23:13.667 =====Discovery Log Entry 0====== 00:23:13.667 trtype: tcp 00:23:13.667 adrfam: ipv4 00:23:13.667 subtype: current discovery subsystem 00:23:13.667 treq: not specified, sq flow control disable supported 00:23:13.667 portid: 1 00:23:13.667 trsvcid: 4420 00:23:13.667 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:13.667 traddr: 10.0.0.1 00:23:13.667 eflags: none 00:23:13.667 sectype: none 00:23:13.667 =====Discovery Log Entry 1====== 00:23:13.667 trtype: tcp 00:23:13.667 adrfam: ipv4 00:23:13.667 subtype: nvme subsystem 00:23:13.667 treq: not specified, sq flow control disable supported 00:23:13.667 portid: 1 00:23:13.667 trsvcid: 4420 00:23:13.667 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:13.667 traddr: 10.0.0.1 00:23:13.667 eflags: none 00:23:13.667 sectype: none 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:13.667 21:37:39 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:13.667 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.944 Initializing NVMe Controllers 00:23:16.944 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:16.944 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:16.944 Initialization complete. Launching workers. 00:23:16.944 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26333, failed: 0 00:23:16.944 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26333, failed to submit 0 00:23:16.944 success 0, unsuccess 26333, failed 0 00:23:16.944 21:37:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:16.944 21:37:42 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:16.944 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.221 Initializing NVMe Controllers 00:23:20.221 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:20.221 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:20.221 Initialization complete. Launching workers. 00:23:20.221 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54775, failed: 0 00:23:20.221 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13778, failed to submit 40997 00:23:20.221 success 0, unsuccess 13778, failed 0 00:23:20.221 21:37:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:20.221 21:37:45 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:20.221 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.496 Initializing NVMe Controllers 00:23:23.496 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:23.496 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:23.496 Initialization complete. Launching workers. 
00:23:23.496 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53470, failed: 0 00:23:23.496 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13342, failed to submit 40128 00:23:23.496 success 0, unsuccess 13342, failed 0 00:23:23.496 21:37:48 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:23.496 21:37:48 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:23.496 21:37:48 -- nvmf/common.sh@675 -- # echo 0 00:23:23.496 21:37:48 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:23.496 21:37:48 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:23.496 21:37:48 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:23.496 21:37:48 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:23.496 21:37:48 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:23.496 21:37:48 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:23.496 21:37:48 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:24.063 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:24.320 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:24.320 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:24.320 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:24.320 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:24.320 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:24.320 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:24.320 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:24.320 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:24.320 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:24.320 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:24.320 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:24.320 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:24.320 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:24.320 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:24.320 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:25.253 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:25.511 00:23:25.511 real 0m14.352s 00:23:25.511 user 0m4.382s 00:23:25.511 sys 0m3.461s 00:23:25.511 21:37:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:25.511 21:37:50 -- common/autotest_common.sh@10 -- # set +x 00:23:25.511 ************************************ 00:23:25.511 END TEST kernel_target_abort 00:23:25.511 ************************************ 00:23:25.511 21:37:50 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:25.511 21:37:50 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:25.511 21:37:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:25.511 21:37:50 -- nvmf/common.sh@117 -- # sync 00:23:25.511 21:37:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:25.511 21:37:50 -- nvmf/common.sh@120 -- # set +e 00:23:25.511 21:37:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:25.511 21:37:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:25.511 rmmod nvme_tcp 00:23:25.511 rmmod nvme_fabrics 00:23:25.511 rmmod nvme_keyring 00:23:25.511 21:37:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:25.511 21:37:51 -- nvmf/common.sh@124 -- # set -e 00:23:25.511 21:37:51 -- nvmf/common.sh@125 -- # return 0 00:23:25.511 21:37:51 -- nvmf/common.sh@478 -- # '[' -n 2702554 ']' 
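The kernel_target_abort half that just finished needs no SPDK target at all: configure_kernel_target drives the in-kernel nvmet stack purely through configfs, and clean_kernel_target unwinds it in reverse before nvmftestfini unloads the initiator-side nvme-tcp modules. The xtrace records only the echo arguments, not their redirection targets, so the attribute paths in this sketch are inferred from the standard nvmet configfs layout rather than visible in the trace:

  modprobe nvmet
  cfg=/sys/kernel/config/nvmet
  subsys=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir $subsys $subsys/namespaces/1 $cfg/ports/1
  # (the trace also writes an identify string, SPDK-nqn.2016-06.io.spdk:testnqn,
  #  to a subsystem attribute; its exact target file is not shown)
  echo 1            > $subsys/attr_allow_any_host
  echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
  echo 1            > $subsys/namespaces/1/enable
  echo 10.0.0.1     > $cfg/ports/1/addr_traddr
  echo tcp          > $cfg/ports/1/addr_trtype
  echo 4420         > $cfg/ports/1/addr_trsvcid
  echo ipv4         > $cfg/ports/1/addr_adrfam
  ln -s $subsys $cfg/ports/1/subsystems/     # linking enables the port
  # teardown, as traced in clean_kernel_target above:
  echo 0 > $subsys/namespaces/1/enable
  rm -f $cfg/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir $subsys/namespaces/1 $cfg/ports/1 $subsys
  modprobe -r nvmet_tcp nvmet

The earlier nvme discover run, returning two records (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn), is what confirmed the port was live before the abort runs started. The abort counters themselves tie out in every report: in the qd=64 kernel run above, 13342 aborts submitted plus 40128 that could not be submitted accounts for all 53470 completed I/Os.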
00:23:25.511 21:37:51 -- nvmf/common.sh@479 -- # killprocess 2702554 00:23:25.511 21:37:51 -- common/autotest_common.sh@936 -- # '[' -z 2702554 ']' 00:23:25.511 21:37:51 -- common/autotest_common.sh@940 -- # kill -0 2702554 00:23:25.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2702554) - No such process 00:23:25.511 21:37:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2702554 is not found' 00:23:25.511 Process with pid 2702554 is not found 00:23:25.511 21:37:51 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:23:25.511 21:37:51 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:26.445 Waiting for block devices as requested 00:23:26.445 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:26.703 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:26.703 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:26.961 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:26.961 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:26.961 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:26.961 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:26.961 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:27.219 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:27.219 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:27.219 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:27.479 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:27.479 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:27.479 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:27.479 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:27.738 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:27.738 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:27.738 21:37:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:27.738 21:37:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:27.738 21:37:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.738 21:37:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.738 21:37:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.738 21:37:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:27.738 21:37:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.267 21:37:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:30.267 00:23:30.267 real 0m38.727s 00:23:30.267 user 1m1.612s 00:23:30.267 sys 0m9.909s 00:23:30.267 21:37:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:30.267 21:37:55 -- common/autotest_common.sh@10 -- # set +x 00:23:30.267 ************************************ 00:23:30.267 END TEST nvmf_abort_qd_sizes 00:23:30.267 ************************************ 00:23:30.267 21:37:55 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:30.267 21:37:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:30.267 21:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:30.267 21:37:55 -- common/autotest_common.sh@10 -- # set +x 00:23:30.267 ************************************ 00:23:30.267 START TEST keyring_file 00:23:30.267 ************************************ 00:23:30.267 21:37:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:30.267 * Looking for test storage... 
00:23:30.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:23:30.267 21:37:55 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:23:30.267 21:37:55 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.267 21:37:55 -- nvmf/common.sh@7 -- # uname -s 00:23:30.267 21:37:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.267 21:37:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.267 21:37:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.267 21:37:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.267 21:37:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.267 21:37:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.267 21:37:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.267 21:37:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.267 21:37:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.267 21:37:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.267 21:37:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:30.267 21:37:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:30.267 21:37:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.267 21:37:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.268 21:37:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.268 21:37:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.268 21:37:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.268 21:37:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.268 21:37:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.268 21:37:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.268 21:37:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.268 21:37:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.268 21:37:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.268 21:37:55 -- paths/export.sh@5 -- # export PATH 00:23:30.268 21:37:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.268 21:37:55 -- nvmf/common.sh@47 -- # : 0 00:23:30.268 21:37:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:30.268 21:37:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:30.268 21:37:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.268 21:37:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.268 21:37:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.268 21:37:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:30.268 21:37:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:30.268 21:37:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:30.268 21:37:55 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:30.268 21:37:55 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:30.268 21:37:55 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:30.268 21:37:55 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:30.268 21:37:55 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:30.268 21:37:55 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:30.268 21:37:55 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:30.268 21:37:55 -- keyring/common.sh@15 -- # local name key digest path 00:23:30.268 21:37:55 -- keyring/common.sh@17 -- # name=key0 00:23:30.268 21:37:55 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:30.268 21:37:55 -- keyring/common.sh@17 -- # digest=0 00:23:30.268 21:37:55 -- keyring/common.sh@18 -- # mktemp 00:23:30.268 21:37:55 -- keyring/common.sh@18 -- # path=/tmp/tmp.rnxGY5hhZO 00:23:30.268 21:37:55 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:30.268 21:37:55 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:30.268 21:37:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:30.268 21:37:55 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:30.268 21:37:55 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:30.268 21:37:55 -- nvmf/common.sh@693 -- # digest=0 00:23:30.268 21:37:55 -- nvmf/common.sh@694 -- # python - 00:23:30.268 21:37:55 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rnxGY5hhZO 00:23:30.268 21:37:55 -- keyring/common.sh@23 -- # echo /tmp/tmp.rnxGY5hhZO 00:23:30.268 21:37:55 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.rnxGY5hhZO 00:23:30.268 21:37:55 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:30.268 21:37:55 -- keyring/common.sh@15 -- # local name key digest path 00:23:30.268 21:37:55 -- keyring/common.sh@17 -- # name=key1 00:23:30.268 21:37:55 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:30.268 21:37:55 -- keyring/common.sh@17 -- # digest=0 00:23:30.268 21:37:55 -- keyring/common.sh@18 -- # mktemp 00:23:30.268 21:37:55 -- keyring/common.sh@18 -- # path=/tmp/tmp.SfvZJC76QR 00:23:30.268 21:37:55 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:30.268 21:37:55 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:23:30.268 21:37:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:30.268 21:37:55 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:30.268 21:37:55 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:23:30.268 21:37:55 -- nvmf/common.sh@693 -- # digest=0 00:23:30.268 21:37:55 -- nvmf/common.sh@694 -- # python - 00:23:30.268 21:37:55 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SfvZJC76QR 00:23:30.268 21:37:55 -- keyring/common.sh@23 -- # echo /tmp/tmp.SfvZJC76QR 00:23:30.268 21:37:55 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.SfvZJC76QR 00:23:30.268 21:37:55 -- keyring/file.sh@30 -- # tgtpid=2708480 00:23:30.268 21:37:55 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:23:30.268 21:37:55 -- keyring/file.sh@32 -- # waitforlisten 2708480 00:23:30.268 21:37:55 -- common/autotest_common.sh@817 -- # '[' -z 2708480 ']' 00:23:30.268 21:37:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.268 21:37:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:30.268 21:37:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.268 21:37:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:30.268 21:37:55 -- common/autotest_common.sh@10 -- # set +x 00:23:30.268 [2024-04-24 21:37:55.701585] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:23:30.268 [2024-04-24 21:37:55.701690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2708480 ] 00:23:30.268 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.268 [2024-04-24 21:37:55.758439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.268 [2024-04-24 21:37:55.862703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.526 21:37:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:30.526 21:37:56 -- common/autotest_common.sh@850 -- # return 0 00:23:30.526 21:37:56 -- keyring/file.sh@33 -- # rpc_cmd 00:23:30.526 21:37:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.526 21:37:56 -- common/autotest_common.sh@10 -- # set +x 00:23:30.526 [2024-04-24 21:37:56.121062] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.526 null0 00:23:30.526 [2024-04-24 21:37:56.153131] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:30.526 [2024-04-24 21:37:56.153638] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:30.527 [2024-04-24 21:37:56.161140] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:30.527 21:37:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.527 21:37:56 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:30.527 21:37:56 -- common/autotest_common.sh@638 -- # local es=0 00:23:30.527 21:37:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:30.527 21:37:56 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:30.527 21:37:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:30.527 21:37:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:30.527 21:37:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:30.527 21:37:56 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:30.527 21:37:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.527 21:37:56 -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 [2024-04-24 21:37:56.173183] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:23:30.527 { 00:23:30.527 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:30.527 "secure_channel": false, 00:23:30.527 "listen_address": { 00:23:30.527 "trtype": "tcp", 00:23:30.527 "traddr": "127.0.0.1", 00:23:30.527 "trsvcid": "4420" 00:23:30.527 }, 00:23:30.527 "method": "nvmf_subsystem_add_listener", 00:23:30.527 "req_id": 1 00:23:30.527 } 00:23:30.527 Got JSON-RPC error response 00:23:30.527 response: 00:23:30.527 { 00:23:30.527 "code": -32602, 00:23:30.527 "message": "Invalid parameters" 00:23:30.527 } 00:23:30.527 21:37:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:30.527 21:37:56 -- common/autotest_common.sh@641 -- # es=1 00:23:30.527 21:37:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:30.527 21:37:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:30.527 21:37:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:30.527 21:37:56 -- keyring/file.sh@46 -- # bperfpid=2708490 00:23:30.527 21:37:56 -- keyring/file.sh@48 -- # waitforlisten 2708490 /var/tmp/bperf.sock 00:23:30.527 21:37:56 -- common/autotest_common.sh@817 -- # '[' -z 2708490 ']' 00:23:30.527 21:37:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:30.527 21:37:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:30.527 21:37:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:30.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:30.527 21:37:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:30.527 21:37:56 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:30.527 21:37:56 -- common/autotest_common.sh@10 -- # set +x 00:23:30.785 [2024-04-24 21:37:56.222397] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
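keyring/file.sh@43 in the trace is the suite's negative-test idiom: the listener on 127.0.0.1:4420 was originally added with the secure-channel (TLS) option, hence the 'TLS support is considered experimental' notices, so re-adding the same address without it must fail. The NOT wrapper inverts the exit status, turning the JSON-RPC -32602 response shown above into a passing assertion. Its shape is roughly this (a sketch only, not the exact autotest_common.sh implementation):

  # NOT <cmd>: succeed exactly when <cmd> fails
  NOT() {
      if "$@"; then
          return 1    # the command unexpectedly succeeded
      fi
      return 0        # expected failure
  }

  NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0

With that settled, bdevperf is launched with -z (wait for the perform_tests RPC before running) and -r /var/tmp/bperf.sock, giving the test a second, independent RPC surface for the keyring calls.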
00:23:30.786 [2024-04-24 21:37:56.222466] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2708490 ] 00:23:30.786 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.786 [2024-04-24 21:37:56.281481] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.786 [2024-04-24 21:37:56.391697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.044 21:37:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:31.044 21:37:56 -- common/autotest_common.sh@850 -- # return 0 00:23:31.044 21:37:56 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rnxGY5hhZO 00:23:31.044 21:37:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rnxGY5hhZO 00:23:31.301 21:37:56 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SfvZJC76QR 00:23:31.301 21:37:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SfvZJC76QR 00:23:31.559 21:37:57 -- keyring/file.sh@51 -- # get_key key0 00:23:31.559 21:37:57 -- keyring/file.sh@51 -- # jq -r .path 00:23:31.559 21:37:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:31.559 21:37:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:31.559 21:37:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:31.816 21:37:57 -- keyring/file.sh@51 -- # [[ /tmp/tmp.rnxGY5hhZO == \/\t\m\p\/\t\m\p\.\r\n\x\G\Y\5\h\h\Z\O ]] 00:23:31.816 21:37:57 -- keyring/file.sh@52 -- # get_key key1 00:23:31.816 21:37:57 -- keyring/file.sh@52 -- # jq -r .path 00:23:31.816 21:37:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:31.816 21:37:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:31.816 21:37:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:32.074 21:37:57 -- keyring/file.sh@52 -- # [[ /tmp/tmp.SfvZJC76QR == \/\t\m\p\/\t\m\p\.\S\f\v\Z\J\C\7\6\Q\R ]] 00:23:32.074 21:37:57 -- keyring/file.sh@53 -- # get_refcnt key0 00:23:32.074 21:37:57 -- keyring/common.sh@12 -- # get_key key0 00:23:32.074 21:37:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:32.074 21:37:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:32.074 21:37:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:32.074 21:37:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:32.332 21:37:57 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:32.332 21:37:57 -- keyring/file.sh@54 -- # get_refcnt key1 00:23:32.332 21:37:57 -- keyring/common.sh@12 -- # get_key key1 00:23:32.332 21:37:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:32.332 21:37:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:32.332 21:37:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:32.332 21:37:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:32.332 21:37:57 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:32.332 
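key0 and key1 were generated earlier as interchange-format TLS PSKs (the NVMeTLSkey-1:... form produced by format_interchange_psk from the raw hex) into mktemp files locked down to mode 0600, and the checks above confirm bdevperf sees both paths with refcnt 1. The round trip against bdevperf's private RPC socket, condensed (bperf_cmd wraps scripts/rpc.py -s /var/tmp/bperf.sock; the /tmp paths are this run's mktemp names):

  rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rnxGY5hhZO
  rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SfvZJC76QR
  rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == "key0") | .refcnt'    # -> 1
  # attaching a TLS controller that holds key0 bumps its refcnt to 2:
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk key0

The 0600 mode is not incidental: keyring_file rejects key files with looser permissions, which is exactly what the chmod 0660 negative test further down exercises ('Invalid permissions for key file ... 0100660').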
21:37:57 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:32.332 21:37:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:32.591 [2024-04-24 21:37:58.213767] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.848 nvme0n1 00:23:32.848 21:37:58 -- keyring/file.sh@59 -- # get_refcnt key0 00:23:32.848 21:37:58 -- keyring/common.sh@12 -- # get_key key0 00:23:32.848 21:37:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:32.848 21:37:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:32.848 21:37:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:32.848 21:37:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:33.106 21:37:58 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:33.106 21:37:58 -- keyring/file.sh@60 -- # get_refcnt key1 00:23:33.106 21:37:58 -- keyring/common.sh@12 -- # get_key key1 00:23:33.106 21:37:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:33.106 21:37:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:33.106 21:37:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:33.106 21:37:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:33.106 21:37:58 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:33.106 21:37:58 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:33.363 Running I/O for 1 seconds... 
00:23:34.296 00:23:34.296 Latency(us) 00:23:34.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.296 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:34.296 nvme0n1 : 1.02 4512.10 17.63 0.00 0.00 28149.15 6941.96 46409.20 00:23:34.296 =================================================================================================================== 00:23:34.296 Total : 4512.10 17.63 0.00 0.00 28149.15 6941.96 46409.20 00:23:34.296 0 00:23:34.296 21:37:59 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:34.296 21:37:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:34.555 21:38:00 -- keyring/file.sh@65 -- # get_refcnt key0 00:23:34.555 21:38:00 -- keyring/common.sh@12 -- # get_key key0 00:23:34.555 21:38:00 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:34.555 21:38:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:34.555 21:38:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:34.555 21:38:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:34.812 21:38:00 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:34.812 21:38:00 -- keyring/file.sh@66 -- # get_refcnt key1 00:23:34.812 21:38:00 -- keyring/common.sh@12 -- # get_key key1 00:23:34.812 21:38:00 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:34.812 21:38:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:34.812 21:38:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:34.812 21:38:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:35.070 21:38:00 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:35.070 21:38:00 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:35.070 21:38:00 -- common/autotest_common.sh@638 -- # local es=0 00:23:35.070 21:38:00 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:35.070 21:38:00 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:35.070 21:38:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:35.070 21:38:00 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:35.070 21:38:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:35.070 21:38:00 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:35.070 21:38:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:35.327 [2024-04-24 21:38:00.928819] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:35.327 [2024-04-24 21:38:00.929706] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d6af0 (107): Transport endpoint is not connected 00:23:35.327 [2024-04-24 21:38:00.930699] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d6af0 (9): Bad file descriptor 00:23:35.327 [2024-04-24 21:38:00.931697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:35.327 [2024-04-24 21:38:00.931716] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:35.327 [2024-04-24 21:38:00.931729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:35.327 request: 00:23:35.327 { 00:23:35.327 "name": "nvme0", 00:23:35.327 "trtype": "tcp", 00:23:35.327 "traddr": "127.0.0.1", 00:23:35.327 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:35.327 "adrfam": "ipv4", 00:23:35.327 "trsvcid": "4420", 00:23:35.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.327 "psk": "key1", 00:23:35.327 "method": "bdev_nvme_attach_controller", 00:23:35.327 "req_id": 1 00:23:35.327 } 00:23:35.327 Got JSON-RPC error response 00:23:35.327 response: 00:23:35.327 { 00:23:35.327 "code": -32602, 00:23:35.327 "message": "Invalid parameters" 00:23:35.327 } 00:23:35.327 21:38:00 -- common/autotest_common.sh@641 -- # es=1 00:23:35.327 21:38:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:35.327 21:38:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:35.327 21:38:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:35.327 21:38:00 -- keyring/file.sh@71 -- # get_refcnt key0 00:23:35.327 21:38:00 -- keyring/common.sh@12 -- # get_key key0 00:23:35.327 21:38:00 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:35.327 21:38:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:35.327 21:38:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:35.328 21:38:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:35.585 21:38:01 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:23:35.585 21:38:01 -- keyring/file.sh@72 -- # get_refcnt key1 00:23:35.585 21:38:01 -- keyring/common.sh@12 -- # get_key key1 00:23:35.585 21:38:01 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:35.585 21:38:01 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:35.585 21:38:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:35.585 21:38:01 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:35.842 21:38:01 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:35.842 21:38:01 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:35.842 21:38:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:36.100 21:38:01 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:36.100 21:38:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:36.357 21:38:01 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:36.357 21:38:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.357 21:38:01 -- keyring/file.sh@77 -- # jq length 00:23:36.614 21:38:02 
-- keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:36.614 21:38:02 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.rnxGY5hhZO 00:23:36.614 21:38:02 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.rnxGY5hhZO 00:23:36.614 21:38:02 -- common/autotest_common.sh@638 -- # local es=0 00:23:36.614 21:38:02 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.rnxGY5hhZO 00:23:36.614 21:38:02 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:36.614 21:38:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:36.614 21:38:02 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:36.614 21:38:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:36.614 21:38:02 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rnxGY5hhZO 00:23:36.614 21:38:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rnxGY5hhZO 00:23:36.872 [2024-04-24 21:38:02.405441] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rnxGY5hhZO': 0100660 00:23:36.872 [2024-04-24 21:38:02.405482] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:36.872 request: 00:23:36.872 { 00:23:36.872 "name": "key0", 00:23:36.872 "path": "/tmp/tmp.rnxGY5hhZO", 00:23:36.872 "method": "keyring_file_add_key", 00:23:36.872 "req_id": 1 00:23:36.872 } 00:23:36.872 Got JSON-RPC error response 00:23:36.872 response: 00:23:36.872 { 00:23:36.872 "code": -1, 00:23:36.872 "message": "Operation not permitted" 00:23:36.872 } 00:23:36.872 21:38:02 -- common/autotest_common.sh@641 -- # es=1 00:23:36.872 21:38:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:36.872 21:38:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:36.872 21:38:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:36.872 21:38:02 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.rnxGY5hhZO 00:23:36.872 21:38:02 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rnxGY5hhZO 00:23:36.872 21:38:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rnxGY5hhZO 00:23:37.130 21:38:02 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.rnxGY5hhZO 00:23:37.130 21:38:02 -- keyring/file.sh@88 -- # get_refcnt key0 00:23:37.130 21:38:02 -- keyring/common.sh@12 -- # get_key key0 00:23:37.130 21:38:02 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:37.130 21:38:02 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:37.130 21:38:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:37.130 21:38:02 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:37.388 21:38:02 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:37.388 21:38:02 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:37.388 21:38:02 -- common/autotest_common.sh@638 -- # local es=0 00:23:37.388 21:38:02 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:37.388 21:38:02 -- 
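The failure above comes from keyring_file_check_path, which refuses any key file whose mode is wider than owner read/write; the es bookkeeping around it is the same NOT pattern as before (the command is required to fail). A condensed reproduction; /tmp/psk.example is a hypothetical file standing in for the mktemp path in the trace:

rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"
psk=/tmp/psk.example                      # hypothetical path, for illustration
echo placeholder-key > "$psk"
chmod 0660 "$psk"
$rpc keyring_file_add_key key0 "$psk" || echo "rejected (expected): mode 0660"
chmod 0600 "$psk"
$rpc keyring_file_add_key key0 "$psk" && echo "accepted: mode 0600"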
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:37.388 21:38:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:37.388 21:38:02 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:37.388 21:38:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:37.388 21:38:02 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:37.388 21:38:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:37.651 [2024-04-24 21:38:03.151518] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.rnxGY5hhZO': No such file or directory 00:23:37.651 [2024-04-24 21:38:03.151558] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:37.651 [2024-04-24 21:38:03.151589] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:37.651 [2024-04-24 21:38:03.151603] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:37.651 [2024-04-24 21:38:03.151625] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:37.651 request: 00:23:37.651 { 00:23:37.651 "name": "nvme0", 00:23:37.651 "trtype": "tcp", 00:23:37.651 "traddr": "127.0.0.1", 00:23:37.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:37.651 "adrfam": "ipv4", 00:23:37.651 "trsvcid": "4420", 00:23:37.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:37.651 "psk": "key0", 00:23:37.651 "method": "bdev_nvme_attach_controller", 00:23:37.651 "req_id": 1 00:23:37.651 } 00:23:37.651 Got JSON-RPC error response 00:23:37.651 response: 00:23:37.651 { 00:23:37.651 "code": -19, 00:23:37.651 "message": "No such device" 00:23:37.651 } 00:23:37.651 21:38:03 -- common/autotest_common.sh@641 -- # es=1 00:23:37.651 21:38:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:37.651 21:38:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:37.651 21:38:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:37.651 21:38:03 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:37.651 21:38:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:37.908 21:38:03 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:37.908 21:38:03 -- keyring/common.sh@15 -- # local name key digest path 00:23:37.908 21:38:03 -- keyring/common.sh@17 -- # name=key0 00:23:37.908 21:38:03 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:37.908 21:38:03 -- keyring/common.sh@17 -- # digest=0 00:23:37.908 21:38:03 -- keyring/common.sh@18 -- # mktemp 00:23:37.908 21:38:03 -- keyring/common.sh@18 -- # path=/tmp/tmp.7LMHacAN2i 00:23:37.908 21:38:03 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:37.908 21:38:03 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:37.908 21:38:03 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:37.908 21:38:03 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:37.908 21:38:03 -- 
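The es=1 / (( es > 128 )) / (( !es == 0 )) sequence above, repeated after every expected failure in this trace, is the harness's NOT wrapper: run the command and treat a plain nonzero exit as success for the test. A condensed sketch of the same logic (the signal-exit handling is assumed from the es > 128 check, not shown in full by the trace):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1   # killed by a signal: not a clean failure
    (( es != 0 ))                # succeed only when the command itself failed
}
# usage: NOT $rpc keyring_file_add_key key0 /nonexistent || echo "test broken"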
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:37.908 21:38:03 -- nvmf/common.sh@693 -- # digest=0 00:23:37.908 21:38:03 -- nvmf/common.sh@694 -- # python - 00:23:37.908 21:38:03 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7LMHacAN2i 00:23:37.908 21:38:03 -- keyring/common.sh@23 -- # echo /tmp/tmp.7LMHacAN2i 00:23:37.908 21:38:03 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.7LMHacAN2i 00:23:37.908 21:38:03 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7LMHacAN2i 00:23:37.908 21:38:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7LMHacAN2i 00:23:38.166 21:38:03 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:38.166 21:38:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:38.424 nvme0n1 00:23:38.424 21:38:04 -- keyring/file.sh@99 -- # get_refcnt key0 00:23:38.424 21:38:04 -- keyring/common.sh@12 -- # get_key key0 00:23:38.424 21:38:04 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:38.424 21:38:04 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:38.424 21:38:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:38.424 21:38:04 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:38.681 21:38:04 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:38.681 21:38:04 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:38.681 21:38:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:38.939 21:38:04 -- keyring/file.sh@101 -- # get_key key0 00:23:38.939 21:38:04 -- keyring/file.sh@101 -- # jq -r .removed 00:23:38.939 21:38:04 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:38.939 21:38:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:38.939 21:38:04 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:39.197 21:38:04 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:39.197 21:38:04 -- keyring/file.sh@102 -- # get_refcnt key0 00:23:39.197 21:38:04 -- keyring/common.sh@12 -- # get_key key0 00:23:39.197 21:38:04 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:39.197 21:38:04 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:39.197 21:38:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:39.197 21:38:04 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:39.455 21:38:04 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:39.455 21:38:04 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:39.455 21:38:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:39.713 21:38:05 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:23:39.713 21:38:05 -- keyring/common.sh@8 -- # 
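prep_key above (keyring/common.sh@15-23 feeding nvmf/common.sh's format_interchange_psk) turns the raw hex key into the interchange form whose NVMeTLSkey-1 prefix is visible in the trace, writes it to a mktemp path, and locks the mode to 0600. The payload encoding in this sketch (base64 of the key bytes plus a trailing little-endian CRC-32) is an assumption inferred from that prefix, not something the trace confirms:

key=00112233445566778899aabbccddeeff digest=0
path=$(mktemp)
python3 - "$key" "$digest" > "$path" <<'PY'
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])                 # configured PSK bytes
crc = binascii.crc32(raw).to_bytes(4, "little")  # CRC-32 trailer (byte order assumed)
print(f"NVMeTLSkey-1:0{sys.argv[2]}:{base64.b64encode(raw + crc).decode()}:")
PY
chmod 0600 "$path"   # required mode, per the permission test above
echo "$path"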
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:39.713 21:38:05 -- keyring/file.sh@104 -- # jq length 00:23:39.970 21:38:05 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:39.970 21:38:05 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7LMHacAN2i 00:23:39.970 21:38:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7LMHacAN2i 00:23:40.227 21:38:05 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SfvZJC76QR 00:23:40.227 21:38:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SfvZJC76QR 00:23:40.485 21:38:05 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:40.485 21:38:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:40.742 nvme0n1 00:23:40.742 21:38:06 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:40.742 21:38:06 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:40.999 21:38:06 -- keyring/file.sh@112 -- # config='{ 00:23:40.999 "subsystems": [ 00:23:40.999 { 00:23:40.999 "subsystem": "keyring", 00:23:40.999 "config": [ 00:23:40.999 { 00:23:40.999 "method": "keyring_file_add_key", 00:23:40.999 "params": { 00:23:40.999 "name": "key0", 00:23:40.999 "path": "/tmp/tmp.7LMHacAN2i" 00:23:40.999 } 00:23:40.999 }, 00:23:40.999 { 00:23:40.999 "method": "keyring_file_add_key", 00:23:40.999 "params": { 00:23:40.999 "name": "key1", 00:23:40.999 "path": "/tmp/tmp.SfvZJC76QR" 00:23:40.999 } 00:23:40.999 } 00:23:40.999 ] 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "subsystem": "iobuf", 00:23:41.000 "config": [ 00:23:41.000 { 00:23:41.000 "method": "iobuf_set_options", 00:23:41.000 "params": { 00:23:41.000 "small_pool_count": 8192, 00:23:41.000 "large_pool_count": 1024, 00:23:41.000 "small_bufsize": 8192, 00:23:41.000 "large_bufsize": 135168 00:23:41.000 } 00:23:41.000 } 00:23:41.000 ] 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "subsystem": "sock", 00:23:41.000 "config": [ 00:23:41.000 { 00:23:41.000 "method": "sock_impl_set_options", 00:23:41.000 "params": { 00:23:41.000 "impl_name": "posix", 00:23:41.000 "recv_buf_size": 2097152, 00:23:41.000 "send_buf_size": 2097152, 00:23:41.000 "enable_recv_pipe": true, 00:23:41.000 "enable_quickack": false, 00:23:41.000 "enable_placement_id": 0, 00:23:41.000 "enable_zerocopy_send_server": true, 00:23:41.000 "enable_zerocopy_send_client": false, 00:23:41.000 "zerocopy_threshold": 0, 00:23:41.000 "tls_version": 0, 00:23:41.000 "enable_ktls": false 00:23:41.000 } 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "method": "sock_impl_set_options", 00:23:41.000 "params": { 00:23:41.000 "impl_name": "ssl", 00:23:41.000 "recv_buf_size": 4096, 00:23:41.000 "send_buf_size": 4096, 00:23:41.000 "enable_recv_pipe": true, 00:23:41.000 "enable_quickack": false, 00:23:41.000 "enable_placement_id": 0, 00:23:41.000 "enable_zerocopy_send_server": true, 00:23:41.000 "enable_zerocopy_send_client": false, 00:23:41.000 "zerocopy_threshold": 
0, 00:23:41.000 "tls_version": 0, 00:23:41.000 "enable_ktls": false 00:23:41.000 } 00:23:41.000 } 00:23:41.000 ] 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "subsystem": "vmd", 00:23:41.000 "config": [] 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "subsystem": "accel", 00:23:41.000 "config": [ 00:23:41.000 { 00:23:41.000 "method": "accel_set_options", 00:23:41.000 "params": { 00:23:41.000 "small_cache_size": 128, 00:23:41.000 "large_cache_size": 16, 00:23:41.000 "task_count": 2048, 00:23:41.000 "sequence_count": 2048, 00:23:41.000 "buf_count": 2048 00:23:41.000 } 00:23:41.000 } 00:23:41.000 ] 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "subsystem": "bdev", 00:23:41.000 "config": [ 00:23:41.000 { 00:23:41.000 "method": "bdev_set_options", 00:23:41.000 "params": { 00:23:41.000 "bdev_io_pool_size": 65535, 00:23:41.000 "bdev_io_cache_size": 256, 00:23:41.000 "bdev_auto_examine": true, 00:23:41.000 "iobuf_small_cache_size": 128, 00:23:41.000 "iobuf_large_cache_size": 16 00:23:41.000 } 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "method": "bdev_raid_set_options", 00:23:41.000 "params": { 00:23:41.000 "process_window_size_kb": 1024 00:23:41.000 } 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "method": "bdev_iscsi_set_options", 00:23:41.000 "params": { 00:23:41.000 "timeout_sec": 30 00:23:41.000 } 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "method": "bdev_nvme_set_options", 00:23:41.000 "params": { 00:23:41.000 "action_on_timeout": "none", 00:23:41.000 "timeout_us": 0, 00:23:41.000 "timeout_admin_us": 0, 00:23:41.000 "keep_alive_timeout_ms": 10000, 00:23:41.000 "arbitration_burst": 0, 00:23:41.000 "low_priority_weight": 0, 00:23:41.000 "medium_priority_weight": 0, 00:23:41.000 "high_priority_weight": 0, 00:23:41.000 "nvme_adminq_poll_period_us": 10000, 00:23:41.000 "nvme_ioq_poll_period_us": 0, 00:23:41.000 "io_queue_requests": 512, 00:23:41.000 "delay_cmd_submit": true, 00:23:41.000 "transport_retry_count": 4, 00:23:41.000 "bdev_retry_count": 3, 00:23:41.000 "transport_ack_timeout": 0, 00:23:41.000 "ctrlr_loss_timeout_sec": 0, 00:23:41.000 "reconnect_delay_sec": 0, 00:23:41.000 "fast_io_fail_timeout_sec": 0, 00:23:41.000 "disable_auto_failback": false, 00:23:41.000 "generate_uuids": false, 00:23:41.000 "transport_tos": 0, 00:23:41.000 "nvme_error_stat": false, 00:23:41.000 "rdma_srq_size": 0, 00:23:41.000 "io_path_stat": false, 00:23:41.000 "allow_accel_sequence": false, 00:23:41.000 "rdma_max_cq_size": 0, 00:23:41.000 "rdma_cm_event_timeout_ms": 0, 00:23:41.000 "dhchap_digests": [ 00:23:41.000 "sha256", 00:23:41.000 "sha384", 00:23:41.000 "sha512" 00:23:41.000 ], 00:23:41.000 "dhchap_dhgroups": [ 00:23:41.000 "null", 00:23:41.000 "ffdhe2048", 00:23:41.000 "ffdhe3072", 00:23:41.000 "ffdhe4096", 00:23:41.000 "ffdhe6144", 00:23:41.000 "ffdhe8192" 00:23:41.000 ] 00:23:41.000 } 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "method": "bdev_nvme_attach_controller", 00:23:41.000 "params": { 00:23:41.000 "name": "nvme0", 00:23:41.000 "trtype": "TCP", 00:23:41.000 "adrfam": "IPv4", 00:23:41.000 "traddr": "127.0.0.1", 00:23:41.000 "trsvcid": "4420", 00:23:41.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:41.000 "prchk_reftag": false, 00:23:41.000 "prchk_guard": false, 00:23:41.000 "ctrlr_loss_timeout_sec": 0, 00:23:41.000 "reconnect_delay_sec": 0, 00:23:41.000 "fast_io_fail_timeout_sec": 0, 00:23:41.000 "psk": "key0", 00:23:41.000 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:41.000 "hdgst": false, 00:23:41.000 "ddgst": false 00:23:41.000 } 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "method": 
"bdev_nvme_set_hotplug", 00:23:41.000 "params": { 00:23:41.000 "period_us": 100000, 00:23:41.000 "enable": false 00:23:41.000 } 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "method": "bdev_wait_for_examine" 00:23:41.000 } 00:23:41.000 ] 00:23:41.000 }, 00:23:41.000 { 00:23:41.000 "subsystem": "nbd", 00:23:41.000 "config": [] 00:23:41.000 } 00:23:41.000 ] 00:23:41.000 }' 00:23:41.000 21:38:06 -- keyring/file.sh@114 -- # killprocess 2708490 00:23:41.000 21:38:06 -- common/autotest_common.sh@936 -- # '[' -z 2708490 ']' 00:23:41.000 21:38:06 -- common/autotest_common.sh@940 -- # kill -0 2708490 00:23:41.000 21:38:06 -- common/autotest_common.sh@941 -- # uname 00:23:41.000 21:38:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:41.000 21:38:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2708490 00:23:41.000 21:38:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:41.000 21:38:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:41.000 21:38:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2708490' 00:23:41.000 killing process with pid 2708490 00:23:41.000 21:38:06 -- common/autotest_common.sh@955 -- # kill 2708490 00:23:41.000 Received shutdown signal, test time was about 1.000000 seconds 00:23:41.000 00:23:41.000 Latency(us) 00:23:41.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.000 =================================================================================================================== 00:23:41.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.000 21:38:06 -- common/autotest_common.sh@960 -- # wait 2708490 00:23:41.258 21:38:06 -- keyring/file.sh@117 -- # bperfpid=2709955 00:23:41.258 21:38:06 -- keyring/file.sh@119 -- # waitforlisten 2709955 /var/tmp/bperf.sock 00:23:41.258 21:38:06 -- common/autotest_common.sh@817 -- # '[' -z 2709955 ']' 00:23:41.258 21:38:06 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:41.258 21:38:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:41.258 21:38:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:41.258 21:38:06 -- keyring/file.sh@115 -- # echo '{ 00:23:41.258 "subsystems": [ 00:23:41.258 { 00:23:41.258 "subsystem": "keyring", 00:23:41.258 "config": [ 00:23:41.258 { 00:23:41.258 "method": "keyring_file_add_key", 00:23:41.258 "params": { 00:23:41.258 "name": "key0", 00:23:41.258 "path": "/tmp/tmp.7LMHacAN2i" 00:23:41.258 } 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "method": "keyring_file_add_key", 00:23:41.258 "params": { 00:23:41.258 "name": "key1", 00:23:41.258 "path": "/tmp/tmp.SfvZJC76QR" 00:23:41.258 } 00:23:41.258 } 00:23:41.258 ] 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "subsystem": "iobuf", 00:23:41.258 "config": [ 00:23:41.258 { 00:23:41.258 "method": "iobuf_set_options", 00:23:41.258 "params": { 00:23:41.258 "small_pool_count": 8192, 00:23:41.258 "large_pool_count": 1024, 00:23:41.258 "small_bufsize": 8192, 00:23:41.258 "large_bufsize": 135168 00:23:41.258 } 00:23:41.258 } 00:23:41.258 ] 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "subsystem": "sock", 00:23:41.258 "config": [ 00:23:41.258 { 00:23:41.258 "method": "sock_impl_set_options", 00:23:41.258 "params": { 00:23:41.258 "impl_name": "posix", 00:23:41.258 "recv_buf_size": 2097152, 00:23:41.258 "send_buf_size": 2097152, 00:23:41.258 "enable_recv_pipe": 
true, 00:23:41.258 "enable_quickack": false, 00:23:41.258 "enable_placement_id": 0, 00:23:41.258 "enable_zerocopy_send_server": true, 00:23:41.258 "enable_zerocopy_send_client": false, 00:23:41.258 "zerocopy_threshold": 0, 00:23:41.258 "tls_version": 0, 00:23:41.258 "enable_ktls": false 00:23:41.258 } 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "method": "sock_impl_set_options", 00:23:41.258 "params": { 00:23:41.258 "impl_name": "ssl", 00:23:41.258 "recv_buf_size": 4096, 00:23:41.258 "send_buf_size": 4096, 00:23:41.258 "enable_recv_pipe": true, 00:23:41.258 "enable_quickack": false, 00:23:41.258 "enable_placement_id": 0, 00:23:41.258 "enable_zerocopy_send_server": true, 00:23:41.258 "enable_zerocopy_send_client": false, 00:23:41.258 "zerocopy_threshold": 0, 00:23:41.258 "tls_version": 0, 00:23:41.258 "enable_ktls": false 00:23:41.258 } 00:23:41.258 } 00:23:41.258 ] 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "subsystem": "vmd", 00:23:41.258 "config": [] 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "subsystem": "accel", 00:23:41.258 "config": [ 00:23:41.258 { 00:23:41.258 "method": "accel_set_options", 00:23:41.258 "params": { 00:23:41.258 "small_cache_size": 128, 00:23:41.258 "large_cache_size": 16, 00:23:41.258 "task_count": 2048, 00:23:41.258 "sequence_count": 2048, 00:23:41.258 "buf_count": 2048 00:23:41.258 } 00:23:41.258 } 00:23:41.258 ] 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "subsystem": "bdev", 00:23:41.258 "config": [ 00:23:41.258 { 00:23:41.258 "method": "bdev_set_options", 00:23:41.258 "params": { 00:23:41.258 "bdev_io_pool_size": 65535, 00:23:41.258 "bdev_io_cache_size": 256, 00:23:41.258 "bdev_auto_examine": true, 00:23:41.258 "iobuf_small_cache_size": 128, 00:23:41.258 "iobuf_large_cache_size": 16 00:23:41.258 } 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "method": "bdev_raid_set_options", 00:23:41.258 "params": { 00:23:41.258 "process_window_size_kb": 1024 00:23:41.258 } 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "method": "bdev_iscsi_set_options", 00:23:41.258 "params": { 00:23:41.258 "timeout_sec": 30 00:23:41.258 } 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "method": "bdev_nvme_set_options", 00:23:41.258 "params": { 00:23:41.258 "action_on_timeout": "none", 00:23:41.258 "timeout_us": 0, 00:23:41.258 "timeout_admin_us": 0, 00:23:41.258 "keep_alive_timeout_ms": 10000, 00:23:41.258 "arbitration_burst": 0, 00:23:41.258 "low_priority_weight": 0, 00:23:41.258 "medium_priority_weight": 0, 00:23:41.258 "high_priority_weight": 0, 00:23:41.258 "nvme_adminq_poll_period_us": 10000, 00:23:41.258 "nvme_ioq_poll_period_us": 0, 00:23:41.258 "io_queue_requests": 512, 00:23:41.258 "delay_cmd_submit": true, 00:23:41.258 "transport_retry_count": 4, 00:23:41.258 "bdev_retry_count": 3, 00:23:41.258 "transport_ack_timeout": 0, 00:23:41.258 "ctrlr_loss_timeout_sec": 0, 00:23:41.258 "reconnect_delay_sec": 0, 00:23:41.258 "fast_io_fail_timeout_sec": 0, 00:23:41.258 "disable_auto_failback": false, 00:23:41.258 "generate_uuids": false, 00:23:41.258 "transport_tos": 0, 00:23:41.258 "nvme_error_stat": false, 00:23:41.258 "rdma_srq_size": 0, 00:23:41.258 "io_path_stat": false, 00:23:41.258 "allow_accel_sequence": false, 00:23:41.258 "rdma_max_cq_size": 0, 00:23:41.258 "rdma_cm_event_timeout_ms": 0, 00:23:41.258 "dhchap_digests": [ 00:23:41.258 "sha256", 00:23:41.258 "sha384", 00:23:41.258 "sha512" 00:23:41.258 ], 00:23:41.258 "dhchap_dhgroups": [ 00:23:41.258 "null", 00:23:41.258 "ffdhe2048", 00:23:41.258 "ffdhe3072", 00:23:41.258 "ffdhe4096", 00:23:41.258 "ffdhe6144", 00:23:41.258 "ffdhe8192" 
00:23:41.258 ] 00:23:41.258 } 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "method": "bdev_nvme_attach_controller", 00:23:41.258 "params": { 00:23:41.258 "name": "nvme0", 00:23:41.258 "trtype": "TCP", 00:23:41.258 "adrfam": "IPv4", 00:23:41.258 "traddr": "127.0.0.1", 00:23:41.258 "trsvcid": "4420", 00:23:41.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:41.258 "prchk_reftag": false, 00:23:41.258 "prchk_guard": false, 00:23:41.258 "ctrlr_loss_timeout_sec": 0, 00:23:41.258 "reconnect_delay_sec": 0, 00:23:41.258 "fast_io_fail_timeout_sec": 0, 00:23:41.258 "psk": "key0", 00:23:41.258 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:41.258 "hdgst": false, 00:23:41.258 "ddgst": false 00:23:41.258 } 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "method": "bdev_nvme_set_hotplug", 00:23:41.258 "params": { 00:23:41.258 "period_us": 100000, 00:23:41.258 "enable": false 00:23:41.258 } 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "method": "bdev_wait_for_examine" 00:23:41.258 } 00:23:41.258 ] 00:23:41.258 }, 00:23:41.258 { 00:23:41.258 "subsystem": "nbd", 00:23:41.258 "config": [] 00:23:41.258 } 00:23:41.258 ] 00:23:41.258 }' 00:23:41.258 21:38:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:41.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:41.258 21:38:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:41.258 21:38:06 -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 [2024-04-24 21:38:06.893731] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:23:41.259 [2024-04-24 21:38:06.893813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2709955 ] 00:23:41.259 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.516 [2024-04-24 21:38:06.956123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.516 [2024-04-24 21:38:07.072106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.811 [2024-04-24 21:38:07.260726] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.413 21:38:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:42.413 21:38:07 -- common/autotest_common.sh@850 -- # return 0 00:23:42.413 21:38:07 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:23:42.413 21:38:07 -- keyring/file.sh@120 -- # jq length 00:23:42.413 21:38:07 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:42.413 21:38:08 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:23:42.413 21:38:08 -- keyring/file.sh@121 -- # get_refcnt key0 00:23:42.413 21:38:08 -- keyring/common.sh@12 -- # get_key key0 00:23:42.413 21:38:08 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:42.413 21:38:08 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:42.413 21:38:08 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:42.413 21:38:08 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:42.670 21:38:08 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:42.670 21:38:08 -- keyring/file.sh@122 -- # get_refcnt key1 00:23:42.670 21:38:08 -- keyring/common.sh@12 -- # get_key key1 00:23:42.670 
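The restart above (file.sh@115, using the bdevperf flags visible in the trace) feeds the JSON captured by save_config back into a fresh process through /dev/fd/63, and the refcnt checks that follow confirm both keys and the nvme0 controller came back from configuration alone. A condensed sketch of the round trip, with paths shortened:

config=$(./scripts/rpc.py -s /var/tmp/bperf.sock save_config)
./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")
# afterwards: key0 refcnt is 2 (keyring + nvme0), key1 refcnt is 1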
21:38:08 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:42.670 21:38:08 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:42.670 21:38:08 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:42.670 21:38:08 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:42.927 21:38:08 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:23:42.927 21:38:08 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:23:42.927 21:38:08 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:23:42.927 21:38:08 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:43.184 21:38:08 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:23:43.185 21:38:08 -- keyring/file.sh@1 -- # cleanup 00:23:43.185 21:38:08 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.7LMHacAN2i /tmp/tmp.SfvZJC76QR 00:23:43.185 21:38:08 -- keyring/file.sh@20 -- # killprocess 2709955 00:23:43.185 21:38:08 -- common/autotest_common.sh@936 -- # '[' -z 2709955 ']' 00:23:43.185 21:38:08 -- common/autotest_common.sh@940 -- # kill -0 2709955 00:23:43.185 21:38:08 -- common/autotest_common.sh@941 -- # uname 00:23:43.185 21:38:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:43.185 21:38:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2709955 00:23:43.185 21:38:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:43.185 21:38:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:43.185 21:38:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2709955' 00:23:43.185 killing process with pid 2709955 00:23:43.185 21:38:08 -- common/autotest_common.sh@955 -- # kill 2709955 00:23:43.185 Received shutdown signal, test time was about 1.000000 seconds 00:23:43.185 00:23:43.185 Latency(us) 00:23:43.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.185 =================================================================================================================== 00:23:43.185 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:43.185 21:38:08 -- common/autotest_common.sh@960 -- # wait 2709955 00:23:43.442 21:38:09 -- keyring/file.sh@21 -- # killprocess 2708480 00:23:43.442 21:38:09 -- common/autotest_common.sh@936 -- # '[' -z 2708480 ']' 00:23:43.442 21:38:09 -- common/autotest_common.sh@940 -- # kill -0 2708480 00:23:43.442 21:38:09 -- common/autotest_common.sh@941 -- # uname 00:23:43.442 21:38:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:43.442 21:38:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2708480 00:23:43.442 21:38:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:43.442 21:38:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:43.442 21:38:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2708480' 00:23:43.442 killing process with pid 2708480 00:23:43.442 21:38:09 -- common/autotest_common.sh@955 -- # kill 2708480 00:23:43.442 [2024-04-24 21:38:09.095892] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:43.442 21:38:09 -- common/autotest_common.sh@960 -- # wait 2708480 00:23:44.007 00:23:44.007 real 0m14.006s 00:23:44.007 user 0m34.371s 00:23:44.007 sys 0m3.147s 00:23:44.007 21:38:09 -- common/autotest_common.sh@1112 -- # 
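killprocess, run above for both the bperf and target pids, always follows one shape: confirm the pid is alive and names the expected reactor, announce the kill, signal, and wait for the shutdown summary. A condensed sketch (the reactor-name guard is simplified from the ps check in the trace):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0              # already gone
    local name; name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_1
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                     # reap if it is our child
}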
xtrace_disable 00:23:44.007 21:38:09 -- common/autotest_common.sh@10 -- # set +x 00:23:44.007 ************************************ 00:23:44.007 END TEST keyring_file 00:23:44.007 ************************************ 00:23:44.007 21:38:09 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:23:44.007 21:38:09 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:23:44.007 21:38:09 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:23:44.007 21:38:09 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:23:44.007 21:38:09 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:44.007 21:38:09 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:23:44.007 21:38:09 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:44.007 21:38:09 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:23:44.007 21:38:09 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:23:44.007 21:38:09 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:23:44.007 21:38:09 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:44.007 21:38:09 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:23:44.007 21:38:09 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:23:44.007 21:38:09 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:23:44.007 21:38:09 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:23:44.007 21:38:09 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:23:44.007 21:38:09 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:23:44.008 21:38:09 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:23:44.008 21:38:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:44.008 21:38:09 -- common/autotest_common.sh@10 -- # set +x 00:23:44.008 21:38:09 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:23:44.008 21:38:09 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:23:44.008 21:38:09 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:23:44.008 21:38:09 -- common/autotest_common.sh@10 -- # set +x 00:23:45.908 INFO: APP EXITING 00:23:45.908 INFO: killing all VMs 00:23:45.908 INFO: killing vhost app 00:23:45.908 INFO: EXIT DONE 00:23:46.842 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:23:46.842 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:23:46.842 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:23:46.842 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:23:47.100 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:23:47.100 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:23:47.100 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:23:47.100 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:23:47.100 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:23:47.100 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:23:47.100 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:23:47.100 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:23:47.100 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:23:47.100 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:23:47.100 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:23:47.100 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:23:47.100 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:23:48.472 Cleaning 00:23:48.472 Removing: /var/run/dpdk/spdk0/config 00:23:48.473 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:48.473 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:48.473 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:48.473 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:48.473 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:23:48.473 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:23:48.473 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:23:48.473 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:23:48.473 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:48.473 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:48.473 Removing: /var/run/dpdk/spdk1/config 00:23:48.473 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:48.473 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:48.473 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:48.473 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:48.473 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:23:48.473 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:23:48.473 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:23:48.473 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:23:48.473 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:48.473 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:48.473 Removing: /var/run/dpdk/spdk1/mp_socket 00:23:48.473 Removing: /var/run/dpdk/spdk2/config 00:23:48.473 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:48.473 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:48.473 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:48.473 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:48.473 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:23:48.473 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:23:48.473 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:23:48.473 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:23:48.473 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:48.473 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:48.473 Removing: /var/run/dpdk/spdk3/config 00:23:48.473 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:48.473 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:48.473 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:48.473 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:48.473 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:23:48.473 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:23:48.473 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:23:48.473 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:23:48.473 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:48.473 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:48.473 Removing: /var/run/dpdk/spdk4/config 00:23:48.473 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:48.473 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:48.473 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:48.473 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:48.473 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:23:48.473 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:23:48.473 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:23:48.473 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:23:48.473 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:48.473 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:48.473 Removing: /dev/shm/bdev_svc_trace.1 00:23:48.473 Removing: /dev/shm/nvmf_trace.0 00:23:48.473 Removing: /dev/shm/spdk_tgt_trace.pid2479787 00:23:48.473 Removing: /var/run/dpdk/spdk0 00:23:48.473 Removing: 
/var/run/dpdk/spdk1 00:23:48.473 Removing: /var/run/dpdk/spdk2 00:23:48.473 Removing: /var/run/dpdk/spdk3 00:23:48.473 Removing: /var/run/dpdk/spdk4 00:23:48.473 Removing: /var/run/dpdk/spdk_pid2478058 00:23:48.473 Removing: /var/run/dpdk/spdk_pid2478945 00:23:48.473 Removing: /var/run/dpdk/spdk_pid2479787 00:23:48.473 Removing: /var/run/dpdk/spdk_pid2480391 00:23:48.473 Removing: /var/run/dpdk/spdk_pid2481084 00:23:48.473 Removing: /var/run/dpdk/spdk_pid2481230 00:23:48.473 Removing: /var/run/dpdk/spdk_pid2482072 00:23:48.473 Removing: /var/run/dpdk/spdk_pid2482086 00:23:48.473 Removing: /var/run/dpdk/spdk_pid2482343 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2484159 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2485091 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2485405 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2485603 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2485815 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2486140 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2486309 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2486469 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2486716 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2487246 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2489611 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2489787 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2489958 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2490080 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2490399 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2490527 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2490845 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2490980 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2491273 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2491285 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2491457 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2491591 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2491966 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2492137 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2492454 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2492639 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2492675 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2492883 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2493047 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2493331 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2493499 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2493778 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2493944 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2494123 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2494393 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2494561 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2494836 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2495006 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2495286 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2495456 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2495616 00:23:48.731 Removing: /var/run/dpdk/spdk_pid2495899 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2496067 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2496346 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2496518 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2496800 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2496965 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2497146 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2497331 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2497556 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2499755 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2527283 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2529820 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2535570 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2538997 00:23:48.732 
Removing: /var/run/dpdk/spdk_pid2541371 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2541884 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2549174 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2549213 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2549829 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2550572 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2551152 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2552170 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2552178 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2552437 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2552455 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2552472 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2553114 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2553776 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2554398 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2554749 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2554841 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2554980 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2556006 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2556862 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2562250 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2562527 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2565186 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2569033 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2571094 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2577636 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2583078 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2584782 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2585451 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2595794 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2598027 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2600953 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2602140 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2603454 00:23:48.732 Removing: /var/run/dpdk/spdk_pid2603504 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2603611 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2603751 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2604181 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2605458 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2606237 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2606563 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2608294 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2608720 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2609287 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2611811 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2618338 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2621109 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2624895 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2625982 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2627087 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2629639 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2632128 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2636369 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2636371 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2639276 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2639413 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2639551 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2639817 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2639878 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2642453 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2642781 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2645447 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2647307 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2650863 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2654161 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2659136 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2659146 00:23:48.990 
Removing: /var/run/dpdk/spdk_pid2671224 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2671764 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2672296 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2672712 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2673420 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2673830 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2674240 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2674655 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2677158 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2677416 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2681218 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2681280 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2683010 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2688055 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2688062 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2691597 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2693014 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2694424 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2695165 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2696693 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2697558 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2702993 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2703381 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2703779 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2705349 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2705623 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2706024 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2708480 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2708490 00:23:48.990 Removing: /var/run/dpdk/spdk_pid2709955 00:23:48.990 Clean 00:23:49.249 21:38:14 -- common/autotest_common.sh@1437 -- # return 0 00:23:49.249 21:38:14 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:23:49.249 21:38:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:49.249 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:23:49.249 21:38:14 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:23:49.249 21:38:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:49.249 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:23:49.249 21:38:14 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:23:49.249 21:38:14 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:23:49.249 21:38:14 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:23:49.249 21:38:14 -- spdk/autotest.sh@389 -- # hash lcov 00:23:49.249 21:38:14 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:49.249 21:38:14 -- spdk/autotest.sh@391 -- # hostname 00:23:49.249 21:38:14 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:23:49.507 geninfo: WARNING: invalid characters removed from testname! 
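The coverage pass below merges the base and test captures into cov_total.info, then strips dpdk, system, and example sources with successive lcov -r runs before rendering. Condensed to the essential flags, with the long output paths shortened:

RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
lcov $RC -a cov_base.info -a cov_test.info -o cov_total.info
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $RC -r cov_total.info "$pat" -o cov_total.info
done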
00:24:16.048 21:38:41 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:20.275 21:38:45 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:22.807 21:38:48 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:26.090 21:38:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:28.625 21:38:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:31.176 21:38:56 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:34.463 21:38:59 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:34.463 21:38:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.463 21:38:59 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:24:34.463 21:38:59 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.463 21:38:59 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.463 21:38:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.463 21:38:59 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.463 21:38:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.463 21:38:59 -- paths/export.sh@5 -- $ export PATH 00:24:34.463 21:38:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.463 21:38:59 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:24:34.463 21:38:59 -- common/autobuild_common.sh@435 -- $ date +%s 00:24:34.463 21:38:59 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713987539.XXXXXX 00:24:34.463 21:38:59 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713987539.4cLSbM 00:24:34.463 21:38:59 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:24:34.463 21:38:59 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:24:34.463 21:38:59 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:24:34.463 21:38:59 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:24:34.463 21:38:59 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:24:34.463 21:38:59 -- common/autobuild_common.sh@451 -- $ get_config_params 00:24:34.463 21:38:59 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:24:34.463 21:38:59 -- common/autotest_common.sh@10 -- $ set +x 00:24:34.463 21:38:59 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:24:34.463 21:38:59 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:24:34.463 21:38:59 -- pm/common@17 -- $ local monitor 00:24:34.463 21:38:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:34.463 21:38:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2718680 00:24:34.463 21:38:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:34.463 21:38:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2718682 00:24:34.463 21:38:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:34.463 21:38:59 -- 
pm/common@21 -- $ date +%s 00:24:34.463 21:38:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2718684 00:24:34.463 21:38:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:34.463 21:38:59 -- pm/common@21 -- $ date +%s 00:24:34.463 21:38:59 -- pm/common@21 -- $ date +%s 00:24:34.463 21:38:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2718687 00:24:34.463 21:38:59 -- pm/common@26 -- $ sleep 1 00:24:34.463 21:38:59 -- pm/common@21 -- $ date +%s 00:24:34.463 21:38:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987539 00:24:34.463 21:38:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987539 00:24:34.463 21:38:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987539 00:24:34.463 21:38:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987539 00:24:34.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987539_collect-vmstat.pm.log 00:24:34.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987539_collect-bmc-pm.bmc.pm.log 00:24:34.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987539_collect-cpu-load.pm.log 00:24:34.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987539_collect-cpu-temp.pm.log 00:24:35.031 21:39:00 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:24:35.031 21:39:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:24:35.031 21:39:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:35.031 21:39:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:24:35.031 21:39:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:24:35.031 21:39:00 -- spdk/autopackage.sh@19 -- $ timing_finish 00:24:35.031 21:39:00 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:35.031 21:39:00 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:24:35.031 21:39:00 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:24:35.031 21:39:00 -- spdk/autopackage.sh@20 -- $ exit 0 00:24:35.031 21:39:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:24:35.031 21:39:00 -- pm/common@30 -- $ signal_monitor_resources TERM 00:24:35.031 21:39:00 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:24:35.031 21:39:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:35.031 21:39:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:24:35.031 21:39:00 -- pm/common@45 -- $ pid=2718699 00:24:35.031 
21:39:00 -- pm/common@52 -- $ sudo kill -TERM 2718699 00:24:35.031 21:39:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:35.031 21:39:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:24:35.031 21:39:00 -- pm/common@45 -- $ pid=2718700 00:24:35.031 21:39:00 -- pm/common@52 -- $ sudo kill -TERM 2718700 00:24:35.031 21:39:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:35.031 21:39:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:24:35.031 21:39:00 -- pm/common@45 -- $ pid=2718701 00:24:35.031 21:39:00 -- pm/common@52 -- $ sudo kill -TERM 2718701 00:24:35.031 21:39:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:35.031 21:39:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:24:35.031 21:39:00 -- pm/common@45 -- $ pid=2718698 00:24:35.031 21:39:00 -- pm/common@52 -- $ sudo kill -TERM 2718698 00:24:35.031 + [[ -n 2395152 ]] 00:24:35.031 + sudo kill 2395152 00:24:35.041 [Pipeline] } 00:24:35.059 [Pipeline] // stage 00:24:35.065 [Pipeline] } 00:24:35.083 [Pipeline] // timeout 00:24:35.089 [Pipeline] } 00:24:35.106 [Pipeline] // catchError 00:24:35.112 [Pipeline] } 00:24:35.130 [Pipeline] // wrap 00:24:35.137 [Pipeline] } 00:24:35.153 [Pipeline] // catchError 00:24:35.162 [Pipeline] stage 00:24:35.164 [Pipeline] { (Epilogue) 00:24:35.179 [Pipeline] catchError 00:24:35.181 [Pipeline] { 00:24:35.195 [Pipeline] echo 00:24:35.197 Cleanup processes 00:24:35.203 [Pipeline] sh 00:24:35.489 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:35.489 2718822 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:24:35.489 2719007 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:35.503 [Pipeline] sh 00:24:35.785 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:35.785 ++ grep -v 'sudo pgrep' 00:24:35.785 ++ awk '{print $1}' 00:24:35.785 + sudo kill -9 2718822 00:24:35.796 [Pipeline] sh 00:24:36.113 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:44.246 [Pipeline] sh 00:24:44.528 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:44.528 Artifacts sizes are good 00:24:44.542 [Pipeline] archiveArtifacts 00:24:44.548 Archiving artifacts 00:24:44.741 [Pipeline] sh 00:24:45.026 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:24:45.042 [Pipeline] cleanWs 00:24:45.052 [WS-CLEANUP] Deleting project workspace... 00:24:45.052 [WS-CLEANUP] Deferred wipeout is used... 00:24:45.059 [WS-CLEANUP] done 00:24:45.061 [Pipeline] } 00:24:45.081 [Pipeline] // catchError 00:24:45.092 [Pipeline] sh 00:24:45.373 + logger -p user.info -t JENKINS-CI 00:24:45.380 [Pipeline] } 00:24:45.392 [Pipeline] // stage 00:24:45.397 [Pipeline] } 00:24:45.409 [Pipeline] // node 00:24:45.412 [Pipeline] End of Pipeline 00:24:45.431 Finished: SUCCESS